
The SPL linker script defines two segments:

arch/arm/cpu/armv8/u-boot-spl.lds

MEMORY { .sram : ORIGIN = IMAGE_TEXT_BASE,
        LENGTH = IMAGE_MAX_SIZE }
MEMORY { .sdram : ORIGIN = CONFIG_SPL_BSS_START_ADDR,
        LENGTH = CONFIG_SPL_BSS_MAX_SIZE }

On some boards, for example imx8ulp_evk, the address of SPL BSS is defined in SRAM:

#ifdef CONFIG_SPL_BUILD
#define CONFIG_SPL_STACK        0x22050000
#define CONFIG_SPL_BSS_START_ADDR   0x22048000
#define CONFIG_SPL_BSS_MAX_SIZE     0x2000  /* 8 KB */
#define CONFIG_SYS_SPL_MALLOC_START 0x22040000
#define CONFIG_SYS_SPL_MALLOC_SIZE  0x8000  /* 32 KB */
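
For reference, these values lay out as follows inside the on-chip SRAM (derived directly from the defines above):

/*
 * 0x22040000 .. 0x22047FFF : SPL malloc pool (32 KB)
 * 0x22048000 .. 0x22049FFF : SPL BSS          (8 KB)
 * 0x22050000               : initial SPL stack top (the stack grows downward)
 */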

But on other boards it is defined in SDRAM.

What's the difference?

  • See [what is the use of SPL](https://stackoverflow.com/questions/31244862/what-is-the-use-of-spl-secondary-program-loader) – sawdust Feb 12 '22 at 05:38

1 Answer


The SPL linker script defines two segments:

The SRAM segment is for the memory occupied by SPL text, data, and stack.
The SDRAM segment is the destination memory for the image loaded by the SPL.
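
Within the SPL image itself, the SECTIONS part of the same linker script (abridged here; details vary between U-Boot versions) routes the output sections onto those two regions: code and data go into .sram, while BSS goes into .sdram:

SECTIONS
{
    .text : {
        *(.__image_copy_start)
        *(.text*)
    } >.sram

    .data : {
        *(.data*)
    } >.sram

    .bss (NOLOAD) : {
        *(.bss*)
    } >.sdram
}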

SRAM (static RAM) does not require any initialization prior to using it. After a system reset, the processor can immediately use (load/store) SRAM. Therefore some SoCs (i.e. MPUs rather than MCUs) integrate a small chunk of SRAM into the chip as "always available memory" for the CPU. There are other attributes that differentiate SRAM from DRAM (e.g. speed, die size, cost), but those are irrelevant to your question.

SDRAM is a type of dynamic RAM (i.e. synchronous DRAM), memory that requires a refresh controller that needs to be initialized (by software) before use. The DRAM is typically external to the SoC for board design flexibility. Beware that some/many people insist on referring to the DRAM by the more-specific implementation technology, e.g. "DDR2", "LPDDR3", or as you did "SDRAM". Regardless, the salient property of such memories is that it's dynamic RAM and there's a DRAM controller to perform refresh.


Before U-Boot can be loaded into main memory (implemented using some type of dynamic memory, e.g. SDRAM, DDR3), a previous boot program must perform that DRAM controller initialization. That's the rationale for the existence of the (internal) SRAM and the use of an (additional) SPL program.

In theory the ROM boot program could perform this DRAM controller initialization. But that assumes that all of the necessary parameters (of the external memory chips) are somehow available to this ROM boot program. To cope with different board configurations, such DRAM parameters would have to be stored in NVM somewhere. But such a scheme is atypical for ARM SoCs, which try to avoid the inflexibility of (masked) ROM by deferring more board initialization tasks to (board-specific) 2nd-stage and 3rd-stage boot programs.
Note there are also other boot schemes, for instance the use of the XIP capability of NOR flash.


On some boards, for example imx8ulp_evk, the address of SPL BSS is defined in SRAM:
...
But on other boards it is defined in SDRAM.

(You neglect to provide any example(s) for the latter case.)

The SPL code would be loaded into SRAM by the ROM bootloader (of the SoC), and would also use SRAM for the initial stack & data. But the SPL functionality is bisected by the initialization of the DRAM controller.

The early stage of SPL should not depend on the availability of BSS, so code should use only stack variables and global_data, and not use static/global variables.
Once the DRAM controller is initialized, the BSS is cleared and available, and all static/global variables can be used. Even the stack can optionally be moved to DRAM. SPL can then perform its primary function, and load the U-Boot image (e.g. u-boot.bin) from the designated NVM to main memory (DRAM).

These code restrictions are documented in the Board Initialisation Flow section of the U-Boot README file.
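
A minimal, hypothetical sketch of what those restrictions look like in a board's SPL code follows. It assumes the platform provides the usual U-Boot SPL helpers spl_early_init(), preloader_console_init() and dram_init(); the names and ordering are illustrative, not any specific board's code:

#include <common.h>
#include <hang.h>
#include <init.h>
#include <spl.h>
#include <asm/global_data.h>

DECLARE_GLOBAL_DATA_PTR;           /* gd lives in a register/SRAM, not in BSS */

/*
 * static int dram_ready;          -- WRONG in early SPL: a static/global
 *                                    variable lands in BSS, i.e. in SDRAM,
 *                                    which is not usable before dram_init().
 */

void board_init_f(ulong dummy)
{
    int ret;                       /* OK: stack variable, lives in SRAM */

    ret = spl_early_init();        /* early setup; still running entirely from SRAM */
    if (ret)
        hang();

    preloader_console_init();      /* console state is kept in gd, not in BSS */

    ret = dram_init();             /* bring up the DRAM controller */
    if (ret)
        hang();

    /*
     * From here on BSS (cleared before board_init_r runs) and the full DRAM
     * are available; SPL can proceed to load u-boot.bin into main memory.
     */
}

Everything before dram_init() has to obey the no-BSS rule; everything after it is free to use static data and the full DRAM.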


What's the difference?

The primary difference would be available memory size. The code restrictions on how the BSS section can be used in SPL code (as well as in U-Boot itself) allow the BSS to be defined in either static or dynamic RAM. Since the internal SRAM is limited in size, while the external DRAM is board dependent but presumably much, much larger, the available size may decide the assignment.

The Kconfig entry for CONFIG_SPL_SEPARATE_BSS in common/spl/Kconfig explains the reason for this option:

config SPL_SEPARATE_BSS
    bool "BSS section is in a different memory region from text"
    help
      Some platforms need a large BSS region in SPL and can provide this
      because RAM is already set up. In this case BSS can be moved to RAM.
      This option should then be enabled so that the correct device tree
      location is used. Normally we put the device tree at the end of BSS
      but with this option enabled, it goes at _image_binary_end.
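
As an illustration, a board that keeps its SPL BSS in DRAM would typically enable this option in its defconfig (whether the BSS address itself is set in the board header, as in your excerpt, or via Kconfig depends on the U-Boot version):

CONFIG_SPL_SEPARATE_BSS=y
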
– sawdust
  • Thank you so much. The imx8ulp_evk board chooses to keep BSS in SRAM. On the other hand, rk3308 uses BSS in SDRAM (relocated from SRAM to SDRAM). What are the considerations based on? And what are the respective advantages and disadvantages? – Bobby Yang Feb 14 '22 at 03:46
  • The "imx8ulp_evk" is a board, whereas "rk3308" is a SoC. So you have still not provided a comparable example of a board. *"What are the considerations ...?"* -- As already answered, the salient difference in using SRAM versus (S)DRAM for SPL BSS is the available amount of memory. – sawdust Feb 15 '22 at 00:32
  • I mean the board [evb_rk3308](https://source.denx.de/u-boot/u-boot/-/blob/master/include/configs/rk3308_common.h) – Bobby Yang Feb 16 '22 at 02:23