Wednesday, 11 April 2018

Combining the Linux Device Tree and Kernel Image for ARM

The device tree takes the place of the older ATAG-based boot parameters (the kernel command line among them), but it does much more by providing information about the hardware present in the system. This creates a separation between the kernel source code and the hardware, so the list of hardware can be changed without modifying the kernel itself. This is a very nice feature for virtual platform developers, as it is often possible to simply remove hardware descriptions from the device tree while the corresponding models are being developed, or if they have problems.
Like the kernel and file system, the device tree can be loaded into memory using the same SystemC loader model. We have been doing this with the Cadence Virtual System Platform for a couple of years now. If you are interested in examples of using the device tree, I covered it in an article about running a Linaro filesystem on the Zynq Virtual Platform last year. Although it's easy to forget, the kernel source tree also has documentation. For the device tree, look in the Documentation/devicetree directory of any kernel source tree.
The most recent development I wanted to cover today is support for combining the kernel image and the device tree into a single file. Some bootloaders have trouble getting the device tree into memory, and in other cases it is simply easier to avoid dealing with an extra file on the target system.
I have used the appended device tree feature on the ARM Versatile Express platform with kernel versions in the 3.9 to 3.11 range. Using it is straightforward. First, make sure the kernel configuration has the feature enabled. It is in the "Boot options --->" menu and called "Use appended device tree blob to zImage" as shown in the screenshot below.
ARM Versatile Express Boot Options
Preparing the zImage is easy -- just cat the compiled device tree .dtb file to the end of the zImage:
$ cat arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb >> arch/arm/boot/zImage
That's it -- the need to separately load the device tree into memory is gone. Just the zImage can be loaded into memory and the kernel will automatically find the appended device tree and use it.
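A quick way to verify the result is to look for the flattened device tree magic number (0xd00dfeed, stored big-endian) in the combined image. The small utility below is only an illustrative sketch and is not part of any tool discussed here; the default file name is an assumption and should be adjusted to match your build.
// check_appended_dtb.cpp -- scan a combined zImage for the FDT magic number
// 0xd00dfeed and report the offset where the appended device tree blob begins.
#include <cstddef>
#include <cstdio>
#include <vector>

int main(int argc, char **argv)
{
    const char *path = (argc > 1) ? argv[1] : "arch/arm/boot/zImage"; // assumed path
    std::FILE *f = std::fopen(path, "rb");
    if (!f) { std::perror(path); return 1; }

    std::vector<unsigned char> buf;
    unsigned char chunk[4096];
    size_t n;
    while ((n = std::fread(chunk, 1, sizeof(chunk), f)) > 0)
        buf.insert(buf.end(), chunk, chunk + n);
    std::fclose(f);

    for (size_t i = 0; i + 4 <= buf.size(); i++) {
        if (buf[i] == 0xd0 && buf[i+1] == 0x0d && buf[i+2] == 0xfe && buf[i+3] == 0xed) {
            std::printf("device tree blob found at offset 0x%zx\n", i);
            return 0;
        }
    }
    std::printf("no device tree blob found in %s\n", path);
    return 1;
}
After the cat step, the utility should report an offset near the end of the file, which is where the appended .dtb begins.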
The trick is remembering that the new zImage has the device tree appended. It may be better to rename the file so you know the device tree has been appended. Also, don't forget this has to be done after every kernel build or device tree compile.
Jason Andrews


A SystemC TLM 2.0 ARM Linux Boot Loader

Earlier this year I wrote an article with some details related to loading Linux into memory for Virtual Platform execution. I reviewed a problem related to Ubuntu on qemu for the ARM Versatile Platform.
At Cadence, we are strong believers in standards, and for Virtual Platforms one of the key standards is SystemC TLM 2.0. Since more and more companies are adopting SystemC for Virtual Platform development, I thought it might be useful for readers to look at the Linux loading process from a SystemC perspective.
I mentioned before that it is very convenient to load the kernel, file system, and kernel command line as separate items into memory and just start running. Today, I will continue with this approach. Perhaps in the future I will cover other approaches that involve combining all of these (plus a complete boot loader like u-boot) into a single file for loading into memory. Some Virtual Platforms also use SD card images that are read directly by a memory model.
I have posted a SystemC ARM Linux Loader model (.cpp and .h files) that is derived from both qemu source code and the OVP SmartLoaderARMLinux component. There is nothing much new about the model except that it can be easily used in a SystemC TLM 2.0 environment. Please note the model is for blogging purposes only, but it does work. Feel free to send any ideas or feedback on the code itself.
Review of the Boot Loader
For more information about the ARM Linux boot process I found the Booting ARM Linux article to be most useful.
Each of the following three items is loaded into memory at a pre-determined address:
  1. kernel command-line arguments
  2. kernel image
  3. file system
Here are the addresses where they are loaded (in the same order):
#define KERNEL_ARGS_ADDR 0x100
#define KERNEL_LOAD_ADDR 0x00010000
#define INITRD_LOAD_ADDR 0x00800000

The last component is the boot loader, which is placed at the reset vector of the CPU (address 0). In this case, the boot loader is a small fragment of assembly code that does the minimum required by the ARM Linux boot protocol: it clears r0, puts the machine (board) ID in r1 and the address of the kernel arguments in r2, and then jumps to the kernel entry point:
static uint32_t bootloader[] = {
  0xe3a00000, /* mov r0, #0 */
  0xe3a01000, /* mov r1, #0x?? (low byte of board id, filled in below) */
  0xe3811c00, /* orr r1, r1, #0x??00 (high byte of board id) */
  0xe59f2000, /* ldr r2, [pc, #0] */
  0xe59ff000, /* ldr pc, [pc, #0] */
  0, /* Address of kernel args. Filled in by arm_load_kernel(). */
  0  /* Kernel entry point. Filled in by arm_load_kernel(). */
};

Some of the elements of the boot loader are filled in later as the kernel is being loaded by the method arm_load_kernel():
// Load bootcode
bootloader[1] |= info->board_id & 0xff;
bootloader[2] |= (info->board_id >> 8) & 0xff;
bootloader[5] = info->loader_start + KERNEL_ARGS_ADDR;
bootloader[6] = entry;
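As a worked example, assume a board ID of 0x8e0 (2272, the registered machine type number for the ARM Versatile Express). The two fixups then produce the following encodings:
bootloader[1] |= 0x8e0 & 0xff;         /* 0xe3a010e0: mov r1, #0xe0      */
bootloader[2] |= (0x8e0 >> 8) & 0xff;  /* 0xe3811c08: orr r1, r1, #0x800 */
So after the two instructions execute, r1 holds 0x8e0, the machine type the kernel expects to find there.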

Now, let's look into some of the SystemC aspects of the loader and how to get all the parts into memory.
TLM 2.0 Initiator Socket
The loader model uses a TLM 2.0 initiator socket to write data into memory. The socket is defined in the .h file:
tlm_utils::simple_initiator_socket<ARMLinuxLoader>  isocket;
In a Virtual Platform there are a number of models connected by a memory-mapped bus. Some models have initiator (master) interfaces, some have target (slave) interfaces, and some have both. A CPU will have one or more initiator interfaces. Peripherals such as UARTs or timers will have target interfaces, and some models, like DMA controllers, will have both initiator and target interfaces.
Designs using TLM 2.0 commonly have a router to route all of the transactions coming from initiators to the correct target based on the memory map of the design.
To use the loader model in SystemC, bind the initiator socket to a target socket to make the connection to the rest of the design. Here is an example of binding the loader initiator socket to a target socket on a multiplexer. The multiplexer then connects into a SystemC TLM 2.0 router (not shown here).
loader->isocket(*multiplexer1->tsocket[1]);
Loading Memory
Since the loader only needs to write memory, there is no need to consider reads. Loading data into memory is done using the TLM 2.0 transport_dbg() interface, which is meant for non-intrusive accesses that do not consume simulation time.
When the generic part of the code decides to write a block of memory, it calls the write_memory() method, passing the start address, the length, and a pointer to the data.
Now, the SystemC specific part starts. A TLM 2.0 transaction is configured for a write, and the address and other values of the transaction payload are set. Finally, the transport_dbg() method of the initiator socket is called. The beauty of TLM 2.0 is that we don't need to know anything about where in the system the memory is located or how it is modeled. The router will automatically take care of making sure the memory data gets written to the right model.
Here are the details of the method used to write the data into memory:
void ARMLinuxLoader::write_memory(uint32_t address, int length, unsigned char *data)
{
    tlm::tlm_generic_payload  trans;
    trans.set_write();
    trans.set_address(address);
    trans.set_data_length(length);
    trans.set_streaming_width(length);
    trans.set_data_ptr((unsigned char *) data);
    trans.set_byte_enable_ptr((unsigned char *) NULL);
    trans.set_byte_enable_length(0);

    // transport_dbg() returns the number of bytes actually transferred
    unsigned int count = isocket->transport_dbg(trans);
    if (count != (unsigned int) length) {
        SC_REPORT_WARNING("ARMLinuxLoader", "debug write transferred fewer bytes than requested");
    }
}

Data Transfer Size
The BYTES_PER_ACCESS define in the model header file determines the maximum size of each write transaction. It is set to only 128 bytes, but it can easily be increased to make each transaction larger and reduce the number of transport_dbg() calls.
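For reference, here is a minimal sketch of how a buffer that has already been read from a file might be pushed into target memory in BYTES_PER_ACCESS-sized pieces using write_memory(); the copy_to_memory() name and its arguments are hypothetical and not part of the posted model.
// Hypothetical helper: copy a loaded image into target memory in
// BYTES_PER_ACCESS-sized pieces, one transport_dbg() call per piece.
void ARMLinuxLoader::copy_to_memory(uint32_t base, unsigned char *image, int size)
{
    int offset = 0;
    while (offset < size) {
        int chunk = size - offset;
        if (chunk > BYTES_PER_ACCESS)
            chunk = BYTES_PER_ACCESS;
        write_memory(base + offset, chunk, image + offset);
        offset += chunk;
    }
}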
Setting parameters at instantiation
The Linux kernel takes a wide assortment of parameters at run time. You can find their description in any kernel source tree in Documentation/kernel-parameters.txt.
The parameters are passed to the kernel by placing them at address 0x100, as shown above. The loader model puts them in the correct area; the string of kernel parameters is simply supplied as a constructor argument.
One example that can easily be demonstrated on a virtual platform is the Linux console. In many systems the default kernel console is the LCD, but it can be changed to use a UART instead.
By adding console=ttyAMA0 to the kernel command line, the kernel uses the first UART as the console for printing messages during the boot.
The instantiation of the loader would look like this:
ARMLinuxLoader *loader = new ARMLinuxLoader("loader", true, "Image/zImage","Image/arm_root2.img", "console=ttyAMA0");
Setting parameters from the command line
Another way to specify the parameters to the loader is using command line arguments. SystemC provides access to the regular C type argv/argc arguments via sc_argc() and sc_argv(). Below is the code in the loader that processes the command line:
    int    sc_argc_c = sc_argc();
    char **sc_argv_c = (char **) sc_argv();
    int    i;

    // +systemc_args+"-kernel <file> -initrd <file> -append <command_string>"
    // <command_string> is the command line passed to the kernel.

    for (i = 1; i < sc_argc_c; i++) {
        if ((!strcmp(sc_argv_c[i], "-kernel")) && (i + 1 < sc_argc_c)) {
            kernelfile = sc_argv_c[i+1];
        }
        else if ((!strcmp(sc_argv_c[i], "-initrd")) && (i + 1 < sc_argc_c)) {
            initrdfile = sc_argv_c[i+1];
        }
        else if ((!strcmp(sc_argv_c[i], "-append")) && (i + 1 < sc_argc_c)) {
            commandString = sc_argv_c[i+1];
        }
    }
 

For Cadence SystemC simulators the arguments are passed using +systemc_args+ as shown in the comment in the code fragment above.
To confirm the loader has done its job, the memory contents can be inspected with a SystemC memory viewer or with a software debugger.
The screenshot below shows a memory view from a software debugger. The data starting from address 0 maps directly to the bootloader[] array.


The next screenshot shows the memory contents at address 0x100, the area with the Linux kernel command line arguments.  The parameters start with the various tags as initialized in the set_kernel_args() method of the loader model.


Then the user argument console=ttyAMA0 finally starts at address 0x13c as shown when the memory viewer is changed to display ASCII instead of hex values.
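For reference, the tag area follows the ARM ATAG boot protocol: an ATAG_CORE tag first, then tags such as ATAG_MEM, ATAG_INITRD2 and ATAG_CMDLINE, terminated by ATAG_NONE. The fragment below is only an illustrative sketch of how such a list can be assembled; the actual set_kernel_args() in the posted model may differ in detail.
#include <cstdint>
#include <cstring>

// Illustrative sketch of an ATAG list as written at KERNEL_ARGS_ADDR (0x100).
// Each tag is a two-word header (size in 32-bit words, tag id) followed by data.
#define ATAG_CORE     0x54410001
#define ATAG_MEM      0x54410002
#define ATAG_INITRD2  0x54420005
#define ATAG_CMDLINE  0x54410009
#define ATAG_NONE     0x00000000

static int build_atags(uint32_t *p, uint32_t mem_size, uint32_t mem_base,
                       uint32_t initrd_addr, uint32_t initrd_size, const char *cmdline)
{
    uint32_t *start = p;
    *p++ = 5; *p++ = ATAG_CORE;                 // flags, pagesize, rootdev (zeros here)
    *p++ = 0; *p++ = 0; *p++ = 0;
    *p++ = 4; *p++ = ATAG_MEM;                  // size and start of RAM
    *p++ = mem_size; *p++ = mem_base;
    *p++ = 4; *p++ = ATAG_INITRD2;              // physical address and size of the initrd
    *p++ = initrd_addr; *p++ = initrd_size;
    int words = (int) ((std::strlen(cmdline) + 1 + 3) / 4);  // command line, padded to a word boundary
    *p++ = 2 + words; *p++ = ATAG_CMDLINE;
    std::memcpy(p, cmdline, std::strlen(cmdline) + 1);
    p += words;
    *p++ = 0; *p++ = ATAG_NONE;                 // end of list
    return (p - start) * 4;                     // total size in bytes
}
With this layout the headers and fixed tags occupy 0x3c bytes (20 for ATAG_CORE, 16 each for ATAG_MEM and ATAG_INITRD2, plus the 8-byte ATAG_CMDLINE header), so the command-line text itself begins at 0x100 + 0x3c = 0x13c, which matches the address observed in the memory viewer above.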

Hopefully, this introduction to the Linux loading process is helpful for users who need to run Linux on a SystemC Virtual Platform. Loading Linux is a bit more complex than just compiling a program and loading it using a debugger command, and there are many ways to do it, but once you get it set up it works great.



How to Cross Compile the Linux Kernel with Device Tree Support


This article is intended for those who would like to experiment with the many embedded boards on the market but do not have access to them for one reason or another. With the QEMU emulator, DIY enthusiasts can experiment to their heart's content.
You may have heard of the many embedded target boards available today, like the BeagleBoard, Raspberry Pi, BeagleBone, PandaBoard, Cubieboard, Wandboard, etc. But once you decide to start development for them, the right hardware with all the peripherals may not be available. The solution is to start embedded Linux development for ARM by emulating the hardware with QEMU, which can be done easily without any hardware at all. There are no risks involved, either.
QEMU is an open source emulator that can emulate the execution of a whole machine with a full-fledged OS running. QEMU supports various architectures, CPUs and target boards. To start with, let's emulate the Versatile Express board as a reference, since it is simple and well supported by recent kernel versions. This board comes with a Cortex-A9 (ARMv7) based CPU.
In this article, I describe the process of cross compiling the Linux kernel for the ARM architecture with device tree support. It covers the entire flow, from the boot loader to the file system, with SD card support. As the process is much the same for most target boards, you can apply these techniques to other boards too.
Device tree
The Flattened Device Tree (FDT) is a data structure for describing hardware; it originates from Open Firmware. With the device tree approach, the kernel itself no longer contains the hardware description; it lives in a separate binary called the device tree blob (dtb). As a result, one compiled kernel can support various hardware configurations within a wider architecture family. For example, the same kernel built for the OMAP family can work with various targets like the BeagleBoard, BeagleBone, PandaBoard, etc., simply by supplying different dtb files. The boot loader must be customised to support this, since two binaries, the kernel image and the dtb file, have to be loaded into memory; the boot loader passes the hardware description to the kernel in the form of the dtb file. Recent kernel versions come with a built-in device tree compiler, which can generate all the dtb files for the selected architecture family from device tree source (dts) files. Use of the device tree has become mandatory for all new ARM SoCs supported by recent kernel versions.
Building QEMU from sources
You may obtain pre-built QEMU binaries from your distro repositories or build QEMU from source, as follows. Download a recent stable version of QEMU, say qemu-2.0.tar.bz2, then extract and build it:
tar -jxvf qemu-2.0.tar.bz2
cd qemu-2.0
./configure --target-list=arm-softmmu,arm-linux-user --prefix=/opt/qemu-arm
make
make install
You will find commands like qemu-arm, qemu-system-arm and qemu-img under /opt/qemu-arm/bin.
Among these, qemu-system-arm is useful to emulate the whole system with OS support.
Preparing an image for the SD card
QEMU can emulate an image file as storage media in the form of the SD card, flash memory, hard disk or CD drive. Let’s create an image file using qemu-img in raw format and create a FAT file system in that, as follows. This image file acts like a physical SD card for the actual target board:
qemu-img create -f raw sdcard.img 128M
# optionally you may create a partition table in this image using tools like sfdisk or parted
mkfs.vfat sdcard.img
#mount this image under some directory and copy required files
mkdir /mnt/sdcard
mount -o loop,rw,sync sdcard.img /mnt/sdcard
Setting up the toolchain
We need a toolchain, which is a collection of cross-development tools used to build components for the target platform. Getting a toolchain matched to your Linux kernel is always tricky, so until you are comfortable with the process please use tested versions only. I have tested with pre-built toolchains from the Linaro organisation, which can be downloaded from http://releases.linaro.org/14.0.4/components/toolchain/binaries/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz (or any later stable version). Next, set the path for the cross tools under this toolchain, as follows:
tar -xvf gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux.tar.xz -C /opt
export PATH=/opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin:$PATH
You will notice various tools like gcc, ld, etc., under /opt/gcc-linaro-arm-linux-gnueabihf-4.8-2014.04_linux/bin with the prefix arm-linux-gnueabihf-.
Building mkimage
The mkimage command is used to create images for use with the u-boot boot loader.
Here, we'll use this tool to transform the kernel image for use with u-boot. Since this tool is distributed only with u-boot, we need a quick build of the boot loader sources to generate mkimage. Download a recent stable version of u-boot (tested with u-boot-2014.04.tar.bz2) from ftp.denx.de/pub/u-boot:
tar -jxvf u-boot-2014.04.tar.bz2
cd u-boot-2014.04
make tools-only
Now, copy mkimage from the tools directory to any directory under the standard path (like /usr/local/bin) as a super user, or set the path to the tools directory each time, before the kernel build.
Building the Linux kernel
Download the most recent stable version of the kernel source from kernel.org (tested with linux-3.14.10.tar.xz):
tar -xvf linux-3.14.10.tar.xz
cd linux-3.14.10
make mrproper #clean all built files and configuration files
make ARCH=arm vexpress_defconfig #default configuration for given board
make ARCH=arm menuconfig #customize the configuration
Figure 1: Kernel configuration - main menu
Then, to customise kernel configuration (Figure 1), follow the steps listed below:
1) Set a personalised string, say ‘-osfy-fdt’, as the local version of the kernel under general setup.
2) Ensure that ARM EABI and old ABI compatibility are enabled under kernel features.
3) Under Device Drivers -> Block devices, enable RAM disk support (for initrd usage) as a static module, and increase the default RAM disk size to 65536 KB (64 MB).
You can use the arrow keys to navigate between options and the space bar to cycle through the possible states (blank, m or *).
4) Make sure devtmpfs is enabled under Device Drivers -> Generic Driver Options.
Now, let’s go ahead with building the kernel, as follows:
#generate kernel image as zImage and necessary dtb files
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- zImage dtbs
#transform zImage to use with u-boot
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- LOADADDR=0x60008000 uImage
#copy necessary files to sdcard
cp arch/arm/boot/zImage /mnt/sdcard
cp arch/arm/boot/uImage /mnt/sdcard
cp arch/arm/boot/dts/*.dtb /mnt/sdcard
#Build dynamic modules and copy to suitable destination
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- modules_install INSTALL_MOD_PATH=<mount point of rootfs>
You may skip the last two steps for the moment, as the given configuration steps avoid dynamic modules. All the necessary modules are configured as static.
Figure 2: Kernel configuration - RAM disk support
Getting rootfs
We require a file system to work with the kernel we've built. Download a pre-built rootfs image to test with QEMU from the following link: http://downloads.yoctoproject.org/releases/yocto/yocto-1.5.2/machines/qemu/qemuarm/core-image-minimal-qemuarm.ext3 and copy it to the mounted SD card image (/mnt/sdcard), renaming it rootfs.img for easy usage. You may obtain a rootfs image from some other repository or build one from sources using Busybox.
Your first try
Let’s boot this kernel image (zImage) directly without u-boot, as follows:
export PATH=/opt/qemu-arm/bin:$PATH
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-initrd /mnt/sdcard/rootfs.img -append "root=/dev/ram0 console=ttyAMA0"
In the above command, we are treating rootfs as ‘initrd image’, which is fine when rootfs is of a small size. You can connect larger file systems in the form of a hard disk or SD card. Let’s try out rootfs through an SD card:
qemu-system-arm -M vexpress-a9 -m 1024 -serial stdio \
-kernel /mnt/sdcard/zImage \
-dtb /mnt/sdcard/vexpress-v2p-ca9.dtb \
-sd /mnt/sdcard/rootfs.img -append "root=/dev/mmcblk0 console=ttyAMA0"
In case the sdcard/image file holds a valid partition table, we need to refer to the individual partitions like /dev/mmcblk0p1, /dev/mmcblk0p2, etc. Since the current image file is not partitioned, we can refer to it by the device file name /dev/mmcblk0.
Building u-boot
Switch back to the u-boot directory (u-boot-2014.04), build u-boot as follows and copy it to the SD card:
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf- vexpress_ca9x4_config
make ARCH=arm CROSS_COMPILE=arm-linux-gnueabihf-
cp u-boot /mnt/sdcard
# you can go for a quick test of generated u-boot as follows
qemu-system-arm -M vexpress-a9 -kernel /mnt/sdcard/u-boot -serial stdio
Let's ignore errors such as u-boot not being able to locate a kernel image or other files at this point.
Figure 3: U-boot loading
Figure 4: Loading of kernel with FDT support
The final steps
Let's boot the system with u-boot from the SD card image file, making sure the QEMU PATH set earlier is still in effect.
Unmount the SD card image and then boot using QEMU.
umount /mnt/sdcard
qemu-system-arm -M vexpress-a9 -sd sdcard.img -m 1024 -serial stdio -kernel u-boot
You can stop autoboot by hitting any key within the time limit, then enter the following commands at the u-boot prompt to load rootfs.img, uImage and the dtb file from the SD card to suitable, non-overlapping memory locations. Also, set the kernel boot parameters using setenv as shown below (here, 0x82000000 is the location of the loaded rootfs image and 8388608 is the size of the rootfs image in bytes).
Note: The following commands are internal to u-boot and must be entered within the u-boot prompt.
fatls mmc 0:0 #list out partition contents
fatload mmc 0:0 0x82000000 rootfs.img # note down the size of image being loaded
fatload mmc 0:0 0x80200000 uImage
fatload mmc 0:0 0x80100000 vexpress-v2p-ca9.dtb
setenv bootargs 'console=ttyAMA0 root=/dev/ram0 rw initrd=0x82000000,8388608'
bootm 0x80200000 - 0x80100000
Ensure there is a space before and after the '-' symbol in the above command.
Log in using ‘root’ as the username and a blank password to play around with the system.
I hope this article proves useful for bootstrapping with embedded Linux and for teaching the concepts when there is no hardware available.
Acknowledgements
I thank Babu Krishnamurthy, a freelance trainer, for his valuable inputs on embedded Linux and OMAP hardware during the course of my embedded journey. I am also grateful to C-DAC for the good support I've received.
