being burned by C is part of the experience of developing software using C
-
Laurent Bercot replied to Ariadne Conill 🐰:therian:
@ariadne some learn good practices from it, others never recover and fear cold water
-
Jack William Bell replied to Ariadne Conill 🐰:therian:
Ah, but at least with C it's less likely you got burned by some hidden 'magic' not adequately explained in the language documentation.
However, with C it's much more likely you got burned because of something you didn't know about the computer architecture you are running the app on.
-
Ariadne Conill 🐰:therian: replied to Jack William Bell
@jackwilliambell I don't necessarily agree. the thought that C is "close to the hardware" is mostly nonsense, the hardware details are abstracted by the target platform support components (libc, kernel headers, crt, etc.)
-
Jack William Bell replied to Ariadne Conill 🐰:therian:
You are forgetting about things like memory alignment and integer size. C mostly makes things work without you having to know about them for your particular hardware. And then one day you find out – the hard way – there are times it doesn't.
Source: Me, who has coded in C since 1985.
ETA: Nope. Since 1984!
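
A minimal sketch of the integer-size point above, assuming the common LP64 (64-bit Linux/macOS) versus 32-bit/Windows split; the program itself is illustrative, not from the thread:

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* long is 64 bits on LP64 targets (64-bit Linux, macOS) but 32 bits on
       Windows and most 32-bit targets; code that stashes a pointer or a
       file offset in a long works on one and truncates on the other. */
    printf("sizeof(long) = %zu bytes, LONG_MAX = %ld\n",
           sizeof(long), LONG_MAX);
    return 0;
}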
-
Ariadne Conill 🐰:therian: replied to Jack William Bell
@jackwilliambell i've been doing security engineering since the early 2000s. i have written a LOT of C code. yes, pointer alignment is an issue, but mostly if you're coming from an x86 world where there is basically no enforcement on pointer alignment.
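
For illustration, a minimal sketch of the alignment point, assuming a typical strict-alignment target (some ARM, MIPS, and SPARC configurations fault where x86 quietly succeeds); the buffer and values are invented:

#include <stdint.h>
#include <inttypes.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    unsigned char buf[8] = {0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88};

    /* Undefined behavior: buf + 1 is not suitably aligned for uint32_t.
       x86 usually tolerates the misaligned load; stricter architectures
       can trap or silently mangle the address. */
    uint32_t risky = *(uint32_t *)(buf + 1);

    /* Portable version: copy into a properly aligned object instead. */
    uint32_t safe;
    memcpy(&safe, buf + 1, sizeof safe);

    printf("0x%08" PRIx32 " 0x%08" PRIx32 "\n", risky, safe);
    return 0;
}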
-
Jack William Bell replied to Ariadne Conill 🐰:therian:
Ever written a complex struct out to a file and then read it back in as raw bytes?
It works. Until it doesn't. Things like alignment and offsets matter. And still do, especially if you are writing code for embedded systems across a wide variety of hardware platforms.
C provides you tools for this stuff, but it doesn't make you use them. And if you've only written code for x86 or ARM and never ported to a different architecture or written low-level OS or driver code, you won't need to.
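
A minimal sketch of the struct-to-file pitfall, assuming a typical ABI that pads members to their natural alignment; the struct and field names are invented for illustration:

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* The compiler may insert padding between members, and how much depends on
   the target ABI, so the raw bytes of this struct are not a portable file
   format. */
struct record {
    uint8_t  tag;    /* 1 byte, then typically 3 bytes of padding */
    uint32_t value;  /* 4 bytes, usually aligned to offset 4 */
    uint16_t count;  /* 2 bytes, often followed by 2 trailing padding bytes */
};

int main(void) {
    /* Commonly prints 12 and 4, not the "packed" 7 and 1. */
    printf("sizeof(struct record) = %zu\n", sizeof(struct record));
    printf("offsetof(value)       = %zu\n", offsetof(struct record, value));

    /* fwrite(&r, sizeof r, 1, fp) would therefore write padding bytes and
       host endianness to disk; a different compiler or architecture may
       read them back as garbage. Portable code serializes field by field. */
    return 0;
}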
-
Jack William Bell replied to Jack William Bell
Look into what it takes to latch a 24-bit hardware register and write a 32-bit integer to it.
ETA: I forgot to mention endianness. But that is a problem in most other languages too and many programmers are already aware of it. But I also forgot to mention how the C compiler optimizes differently for different targets and how those optimizations sometimes break things all by their lonesome.
-
Ariadne Conill 🐰:therian: replied to Jack William Bell
@jackwilliambell DMA buffers should always be considered an opaque set of bytes. so if you can only write 24 bits at a time, you write the lowest 3 bytes, and then move the latch to a new memory address and write the highest byte.
there is endian.h for byteswapping, but it's good software engineering practice to have an internal endianness and always be converting in and out of that representation when reading/writing from foreign I/O ports, DMAs, etc.
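A minimal sketch of that convention, assuming glibc/BSD <endian.h> (htole32()/le32toh() are extensions, not ISO C) and an invented little-endian wire format; values stay in host ("internal") order and are converted only at the I/O boundary:

#include <endian.h>   /* glibc/BSD extension: htole32, le32toh */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Write a host-order value into a buffer the device or file format defines
   as little-endian. memcpy avoids alignment assumptions about dst. */
static void put_u32_le(uint8_t *dst, uint32_t host_value)
{
    uint32_t wire = htole32(host_value);
    memcpy(dst, &wire, sizeof wire);
}

/* Read it back, converting to host endianness immediately. */
static uint32_t get_u32_le(const uint8_t *src)
{
    uint32_t wire;
    memcpy(&wire, src, sizeof wire);
    return le32toh(wire);
}

int main(void)
{
    uint8_t buf[4];
    put_u32_le(buf, 0x11223344u);
    printf("round trip: 0x%08x\n", (unsigned)get_u32_le(buf));
    return 0;
}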