.. SPDX-License-Identifier: GPL-2.0+
.. Copyright 2021 Google LLC
.. sectionauthor:: Simon Glass <sjg@chromium.org>

Writing Tests
=============

This describes how to write tests in U-Boot and the possible options.

Test types
----------

There are two basic types of test in U-Boot:

  - Python tests, in test/py/tests
  - C tests, in test/ and its subdirectories

(There are also UEFI tests in lib/efi_selftest/, not considered here.)

Python tests talk to U-Boot via the command line. They support both sandbox and
real hardware. They typically do not require building test code into U-Boot
itself. They are fairly slow to run, due to the command-line interface and there
being two separate processes. Python tests are fairly easy to write. They can
be a little tricky to debug sometimes, due to the voluminous output of pytest.

C tests are written directly in U-Boot. While they can be used on boards, they
are more commonly used with sandbox, as they obviously add to U-Boot code size.
C tests are easy to write so long as the required facilities exist. Where they
do not, writing them can involve refactoring or adding new features to sandbox.
They are fast to run and easy to debug.

Regardless of which test type is used, all tests are collected and run by the
pytest framework, so there is typically no need to run them separately. This
means that C tests can be used when it makes sense, and Python tests when it
doesn't.


This table shows how to decide whether to write a C or Python test:

===================== =========================== =============================
Attribute             C test                      Python test
===================== =========================== =============================
Fast to run?          Yes                         No (two separate processes)
Easy to write?        Yes, if required test       Yes
                      features exist in sandbox
                      or the target system
Needs code in U-Boot? Yes                         No, provided the test can be
                                                  executed and the result
                                                  determined using the command
                                                  line
Easy to debug?        Yes                         No, since access to the U-Boot
                                                  state is not available and the
                                                  amount of output can
                                                  sometimes require a bit of
                                                  digging
Can use gdb?          Yes, directly               Yes, with --gdbserver
Can run on boards?    Some can, but only if       Some
                      compiled in and not
                      dependent on sandbox
===================== =========================== =============================


Python or C
-----------

Typically in U-Boot we encourage C tests using sandbox for all features. This
allows fast testing and easy development, and lets contributors make changes
without needing dozens of boards to test with.

When a test requires setup or interaction with the running host (such as
generating images and then running U-Boot to check that they can be loaded), or
cannot be run on sandbox, Python tests should be used. These should typically
NOT rely on running with sandbox, but instead should function correctly on any
board supported by U-Boot.


How slow are Python tests?
--------------------------

Under the hood, when running on sandbox, Python tests work by starting a sandbox
test and connecting to it via a pipe. Each interaction with the U-Boot process
requires at least a context switch to handle the pipe interaction. The test
sends a command to U-Boot, which then reacts and shows some output, then the
test sees that and continues. Of course on real hardware, communications delays
(e.g. with a serial console) make this slower.

For comparison, consider a test that checks the 'md' (memory dump) command. All
times below are approximate, as measured on an AMD 2950X system.
Here is the test in Python::

    @pytest.mark.buildconfigspec('cmd_memory')
    def test_md(u_boot_console):
        """Test that md reads memory as expected, and that memory can be
        modified using the mw command."""

        ram_base = u_boot_utils.find_ram_base(u_boot_console)
        addr = '%08x' % ram_base
        val = 'a5f09876'
        expected_response = addr + ': ' + val
        u_boot_console.run_command('mw ' + addr + ' 0 10')
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(not (expected_response in response))
        u_boot_console.run_command('mw ' + addr + ' ' + val)
        response = u_boot_console.run_command('md ' + addr + ' 10')
        assert(expected_response in response)

This runs a few commands and checks the output. Note that it runs a command,
waits for the response and then checks it against what is expected. If run by
itself it takes around 800ms, including test collection. For 1000 runs it takes
19 seconds, or 19ms per run. Of course 1000 runs is not that useful, since we
only want to run it once.
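
The assertion in a test like this is plain string matching on captured console
output, so the matching logic itself can be exercised without starting U-Boot
at all. The sketch below is purely illustrative; the helper names and the
sample response are hypothetical, not part of the U-Boot test API:

```python
# Sketch only: these helpers are illustrative, not part of U-Boot's test API.
# They mimic the string matching that test_md performs on console output.

def md_expected(addr, val):
    """Build the line prefix that 'md' prints when addr holds val."""
    return '%08x: %s' % (addr, val)

def response_contains(response, addr, val):
    """Return True if a captured 'md' response shows val stored at addr."""
    return any(line.startswith(md_expected(addr, val))
               for line in response.splitlines())

# Hypothetical captured output for 'md 00000100 4' after 'mw 00000100 a5f09876':
sample = '00000100: a5f09876 00000000 00000000 00000000    v.......'
```

Keeping the matching logic in small pure functions like this makes the test
itself easy to read and the failure messages easy to interpret.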

There is no exactly equivalent C test, but here is a similar one that tests
'ms' (memory search)::

    /* Test 'ms' command with bytes */
    static int mem_test_ms_b(struct unit_test_state *uts)
    {
        u8 *buf;

        buf = map_sysmem(0, BUF_SIZE + 1);
        memset(buf, '\0', BUF_SIZE);
        buf[0x0] = 0x12;
        buf[0x31] = 0x12;
        buf[0xff] = 0x12;
        buf[0x100] = 0x12;
        ut_assertok(console_record_reset_enable());
        run_command("ms.b 1 ff 12", 0);
        ut_assert_nextline("00000030: 00 12 00 00 00 00 00 00 00 00 00 00 00 00 00 00    ................");
        ut_assert_nextline("--");
        ut_assert_nextline("000000f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 12    ................");
        ut_assert_nextline("2 matches");
        ut_assert_console_end();

        ut_asserteq(2, env_get_hex("memmatches", 0));
        ut_asserteq(0xff, env_get_hex("memaddr", 0));
        ut_asserteq(0xfe, env_get_hex("mempos", 0));

        unmap_sysmem(buf);

        return 0;
    }
    MEM_TEST(mem_test_ms_b, UT_TESTF_CONSOLE_REC);

This runs the command directly in U-Boot, then checks the console output, also
directly in U-Boot. If run by itself this takes about 100ms. For 1000 runs it
takes 660ms, or 0.66ms per run.

So overall a C test runs perhaps 8 times faster individually (100ms vs. 800ms)
and each interaction is nearly 30 times faster (0.66ms vs. 19ms per run).

It should also be noted that the C test is fairly easy to debug. You can set a
breakpoint on do_mem_search(), which is what implements the 'ms' command,
single-step to see what might be wrong, and so on. That is also possible with
the pytest, but requires two terminals and --gdbserver.


Why does speed matter?
----------------------

Many development activities rely on running tests:

  - 'git bisect run make qcheck' can be used to find a failing commit
  - test-driven development relies on quick iteration of build/test
  - U-Boot's continuous integration (CI) systems make use of tests. Running
    all sandbox tests typically takes 90 seconds and running each qemu test
    takes about 30 seconds. This is currently dwarfed by the time taken to
    build all boards

As U-Boot continues to grow its feature set, fast and reliable tests are a
critical factor in developer productivity and happiness.


Writing C tests
---------------

C tests are arranged into suites, which are typically executed by the 'ut'
command. Each suite is in its own file. This section describes how to
accomplish some common test tasks.

(There are also UEFI C tests in lib/efi_selftest/, not considered here.)

Add a new driver model test
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when adding a test for a new or existing uclass, adding new operations
or features to a uclass, adding new ofnode or dev_read_...() functions, or
anything else related to driver model.

Find a suitable place for your test, perhaps near other test functions in
existing code, or in a new file. Each uclass should have its own test file.

Declare the test with::

    /* Test that ... */
    static int dm_test_uclassname_what(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    DM_TEST(dm_test_uclassname_what, UT_TESTF_SCAN_FDT);

Replace 'uclassname' with the name of your uclass, if applicable. Replace
'what' with what you are testing.

The flags for DM_TEST() are defined in test/test.h and you typically want
UT_TESTF_SCAN_FDT, so that the devicetree is scanned and all devices are bound
and ready for use. The DM_TEST macro adds UT_TESTF_DM automatically so that
the test runner knows it is a driver model test.

Driver model tests are special in that the entire driver model state is
recreated anew for each test. This ensures that if a previous test deletes a
device, for example, it does not affect subsequent tests.
Driver model tests also run both with livetree and flattree, to ensure that
both devicetree implementations work as expected.

Example commit: c48cb7ebfb4 ("sandbox: add ADC unit tests") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/c48cb7ebfb4


Add a C test to an existing suite
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Use this when you are adding to or modifying an existing feature outside driver
model. An example is bloblist.

Add a new function in the same file as the rest of the suite and register it
with the suite. For example, to add a new mem_search test::

    /* Test 'ms' command with 32-bit values */
    static int mem_test_ms_new_thing(struct unit_test_state *uts)
    {
        /* test code here */

        return 0;
    }
    MEM_TEST(mem_test_ms_new_thing, UT_TESTF_CONSOLE_REC);

Note that the MEM_TEST() macro is defined at the top of the file.

Example commit: 9fe064646d2 ("bloblist: Support relocating to a larger space") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/9fe064646d2


Add a new test suite
~~~~~~~~~~~~~~~~~~~~

Each suite should focus on one feature or subsystem, so if you are writing a
new one of those, you should add a new suite.

Create a new file in test/ or a subdirectory and define a macro to register the
suite.
For example::

    #include <common.h>
    #include <console.h>
    #include <mapmem.h>
    #include <dm/test.h>
    #include <test/ut.h>

    /* Declare a new wibble test */
    #define WIBBLE_TEST(_name, _flags) UNIT_TEST(_name, _flags, wibble_test)

    /* Tests go here */

    /* At the bottom of the file: */

    int do_ut_wibble(struct cmd_tbl *cmdtp, int flag, int argc, char *const argv[])
    {
        struct unit_test *tests = UNIT_TEST_SUITE_START(wibble_test);
        const int n_ents = UNIT_TEST_SUITE_COUNT(wibble_test);

        return cmd_ut_category("cmd_wibble", "wibble_test_",
                               tests, n_ents, argc, argv);
    }

Then add new tests to it as above.

Register this new suite in test/cmd_ut.c by adding to cmd_ut_sub[]::

    /* Within cmd_ut_sub[]... */

    U_BOOT_CMD_MKENT(wibble, CONFIG_SYS_MAXARGS, 1, do_ut_wibble, "", ""),

and add new help to ut_help_text[]::

    "ut wibble - Test the wibble feature\n"

If your feature is conditional on a particular Kconfig option, you can use
#ifdef to control that.

Finally, add the test to the build by adding to the Makefile in the same
directory::

    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o

Note that CMDLINE is never enabled in SPL, so this test will only be present in
U-Boot proper. See below for how to write SPL tests.

As before, you can add an extra Kconfig check if needed::

    ifneq ($(CONFIG_$(SPL_)WIBBLE),)
    obj-$(CONFIG_$(SPL_)CMDLINE) += wibble.o
    endif


Example commit: 919e7a8fb64 ("test: Add a simple test for bloblist") [1]

[1] https://gitlab.denx.de/u-boot/u-boot/-/commit/919e7a8fb64


Making the test run from pytest
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

All C tests must run from pytest. Typically this is automatic, since pytest
scans the U-Boot executable for available tests to run. So long as you have a
'ut' subcommand for your test suite, it will run.
The same applies for driver model tests, since they use the 'ut dm' subcommand.

See test/py/tests/test_ut.py for how unit tests are run.


Add a C test for SPL
~~~~~~~~~~~~~~~~~~~~

Note: C tests are only available for sandbox_spl at present. There is currently
no mechanism in other boards to run SPL tests, even if they are built into the
image.

SPL tests cannot be run from the 'ut' command, since there are no commands
available in SPL. Instead, sandbox (only) calls ut_run_list() on start-up, when
the -u flag is given. This runs the available unit tests, no matter what suite
they are in.

To create a new SPL test, follow the same rules as above, either adding to an
existing suite or creating a new one.

An example SPL test is spl_test_load().


Writing Python tests
--------------------

See :doc:`py_testing` for brief notes on how to write Python tests. You should
be able to use the existing tests in test/py/tests as examples.