Very interesting ideas.
Really helpful. I had notes somewhere, but basically it's the difference between 3D printing a bicycle in one piece and saying "let's call this a wheel, this a fork, this a frame, this a handlebar, etc.", testing them all separately, then integrating them together. Swap parts out easily, etc. All that standard stuff.
Makes it a lot easier to use the code when it's not a 500-line-long monstrosity.
Interesting…
Doing TDD feels slower because you're constantly context-switching between testing and dev, but you end up getting done faster because you spend less time debugging: you're always green.
Don't want fragile tests.
Also, TDD doesn't miraculously make better code. And somehow mocks aren't necessary in ideal unit tests, because the coupling is kept that low? Interesting. https://news.ycombinator.com/item?id=14661285
AFL with cmake: https://foxglovesecurity.com/2016/03/15/fuzzing-workflows-a-fuzz-job-from-start-to-finish/
https://nullprogram.com/blog/2019/01/25/
Linux talk recommends http://lcamtuf.coredump.cx/afl/ (google fuzz toolkit) as well as some classes: http://events.linuxfoundation.org/sites/events/files/slides/ELCdeck-final.pdf#page=78
Another google toolkit http://honggfuzz.com/
Choose item by sku 12345
Item price should be $7.00
Set quantity to 6
Shopping cart total should be $42.00 – TestObsessed
Should be easy to read for a non-programmer and written early on. Not sure whether the code needs to read the text directly or not, but it'd be nice. Cucumber, FitNesse.
In order to test at this level of abstraction for, say, ipmctl, you have to think about it up front.
Unit tests: 100% coverage is a bad metric on its own. Good tests that eventually reach 100% coverage are much better.
Apparently they can instrument the code, similar to Bullseye, to measure coverage.
A competitor is Simics, which will do coverage measurement; requires a tracker for the kernel.
Does it handle kernels? (virtual address space)
The way our VectorCast is set up, we can only write tests for a specific file. No interaction with any other files, so all of our integration tests are worthless.
Currently, getting VectorCast working, I'm asserting "the expected output is the length of the message type". But if the length of the message changes per the spec, things down the line will break!
TEST.EXPECTED:foo:SOME_VARIABLE+1
← only supports one variable at the end, which C asserts can do (and more). As a transition, OR together the lines covered from VC and Bullseye?
Still need a testing framework. Validation black box / Glenn's tool is one good option for quick iteration + custom diag cmds.
Copy what the kernel does for coverage output from Bullseye?
Currently duplicating cmake includes in VC so that VC can build correctly.
You can check code coverage from tests on hardware?? But most programmers try to run on emulators…
Best practice for unit testing (Stack Exchange)