The Littlest UART and Other Interface Style Fables

by Trenton Henry

04/10/20


Note - this is an excerpt from a much larger document about the general topic of "how to write firmware". This section is about interface style, so it might be a little out of context. The basic point was that how you define your interfaces has a huge impact on how your clients have to write their code to use them; so you should do it right.


Interface Style - This is one of the most important things to think about before writing any code. Your interfaces directly determine how other code must be written if it is to employ your modules. If you create baroque, clunky, high ceremony interfaces then your clients are going to have to write baroque, clunky, high ceremony application code. And they will not be happy with you, unless they are idiots, which they probably aren't. I cannot tell you precisely what style of interface is right for your specific project, so you are going to have to figure that out for yourself. But I can attempt to illustrate a small selection of interface styles.


Small - The Story of the Littlest UART

Once upon a time there was a very little UART that was obsessed with optimality and code footprint. It was called U0, and was all about high speed and low drag. Not surprisingly, U0 didn't have much to say. Its entire vocabulary consisted of a few simple phrases:


enable(baud, stop, parity)

To remain fit and trim, U0 was adamant that 8-bit words were all that mattered, and that hardware handshake was unnecessary. U0 was quite opinionated, and frustrated that the world had not settled on one value for stop and parity bits. So U0 was grudgingly forced to concede that, for the sake of interoperability, both needed to be configurable.

disable()

U0 could be disabled with a single request. Quite efficient. Saves power.

write(buf, len)

U0 was happy to transmit a buffer of bytes. It believed that transmitting these bytes was the most important thing that needed doing, so it expected everyone to wait until it finished transmission.

read(buf, len)

U0 was also happy to receive a buffer of bytes. It expected everyone to wait until it received exactly the number of bytes requested. Nothing is more important, so you need to wait. Possibly forever, if that many bytes fail to arrive. This was the only reasonable thing, thought U0. After all, if you didn't want that many bytes you shouldn't have asked for them.
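
Written down as a C header, U0's entire vocabulary might look something like this (the names, types, and units are my own illustrative guesses, not a real API):

    /* u0.h - a sketch of the littlest UART's interface.
     * All names and types here are illustrative assumptions. */
    #include <stddef.h>
    #include <stdint.h>

    typedef enum { U0_PARITY_NONE, U0_PARITY_EVEN, U0_PARITY_ODD } u0_parity_t;

    void u0_enable(uint32_t baud, uint8_t stop_bits, u0_parity_t parity);
    void u0_disable(void);

    /* Blocks until all len bytes have been transmitted. */
    void u0_write(const uint8_t *buf, size_t len);

    /* Blocks until exactly len bytes have arrived - possibly forever. */
    void u0_read(uint8_t *buf, size_t len);

Note that nothing returns a status and everything blocks; in U0's world there is simply nothing to report.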


U0 was quite pleased with itself, and enjoyed the adulation it received for its tiny code footprint and concise, no-nonsense interface. Until one day there was another UART that could say everything U0 could say, except that it staunchly refused to say "read". This new UART thought that the main thing to be concerned about was transmitting, and that receiving was non-essential. It became known as Umin, since it was clearly smaller than the no-longer-littlest UART, U0, and rapidly became the darling of a number of important and influential applications. Embittered, U0 emitted a dismissive XOFF and disabled itself in humiliation.


Medium - The Story of the Little UART that Could

This one time there was this UART, called U1, that was a relative of U0. U1 thought that U0 was stingy and unforgiving. So U1 thought to itself that maybe it could do something different, possibly making the world a better place for everyone. So U1 changed things up ever so slightly, some might even argue timidly, to see what would happen.


enable(baud, stop, parity, handshake)

U1 was generally in accord with U0 when it came to things like enabling and disabling. U1 didn't have much to add, other than hardware handshake. So long as you only wanted RTS/CTS. U1 wasn't confident that it could manage DTR/DSR.


disable()

U1 thought that U0 had this one just right, and added nothing.


write(buf, len, timeout)

U1 wondered if it might be useful to offer a timeout, in case the other end of the wire asserted flow control and never de-asserted it afterwards. So U1 returned the number of bytes actually transmitted. If there was a timeout then the returned count would be less than the specified count, so you'd know about it. U1 hoped that might be useful.


read(buf, len, timeout)

U1 worried that U0 could mindlessly wait forever for bytes that never arrived (though to be fair U0 did argue that it was in fact waiting mindfully). So receiving also had a timeout and returned the number of bytes actually received. Secretly, U1 was pretty proud of itself, but it was not in U1's nature to engage in one-upmanship.
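
In C, U1's timid little changes might sketch out like so (again, the names and the millisecond timeout unit are assumptions):

    /* u1.h - a sketch of U1's slightly more forgiving interface.
     * Names, types, and the millisecond timeout unit are assumptions. */
    #include <stddef.h>
    #include <stdint.h>

    typedef enum { U1_PARITY_NONE, U1_PARITY_EVEN, U1_PARITY_ODD } u1_parity_t;
    typedef enum { U1_HANDSHAKE_NONE, U1_HANDSHAKE_RTS_CTS } u1_handshake_t;

    void u1_enable(uint32_t baud, uint8_t stop_bits, u1_parity_t parity,
                   u1_handshake_t handshake);
    void u1_disable(void);

    /* Returns the number of bytes actually transmitted; a short count
     * means the timeout expired first. */
    size_t u1_write(const uint8_t *buf, size_t len, uint32_t timeout_ms);

    /* Returns the number of bytes actually received before the timeout. */
    size_t u1_read(uint8_t *buf, size_t len, uint32_t timeout_ms);

A client can now tell "got everything" from "gave up waiting" with a single comparison, e.g. if (u1_read(buf, want, 100) < want) { /* handle the shortfall */ }.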


At first none of the shiny UsefulApplications gave U1 any notice. They were mostly content with U0, and didn't see any need to change. But then a sexy new wireless tech called IrDA came along, and transmission and reception errors became all too common. These UsefulApplications started to notice that they were waiting inordinate lengths of time to receive things that just never seemed to arrive. It simply wasn't proper, and they became frustrated. "Who can rectify this most egregious of situations?" they asked. U1 said "pick me, pick me!" but no one listened. "Harrumph," said the big important applications. "You can't do enough," they said. "We're looking for a big, strong UART that can handle our traffic in its sleep. You are just too wimpy." U1 felt ashamed and mostly kept to itself after that.


But then this one VeryImportantApplication wearing pinstripes got stuck waiting to receive the last byte of a buffer. Everything depended on that big and important application, and when it stopped responding, kernel panic ensued. In desperation the men in white coats and hard hats searched everywhere for a mighty UART to rescue the VeryImportantApplication and restore order. U1 was shy, and self-conscious, and didn't want to embarrass itself again, so it stood quietly to the side. But a very important man, an actual TechLead, bumped into U1 and said, "Hey there chip, don't be shy. I'll bet that you could fix the VeryImportantApplication that everyone depends on. You may just have what it takes. What do you say, will you give it a go?" And U1 was all like, "Awww, shucks, I... I'm not sure... I suppose maybe I could try? If you think it would be Ok, I mean."


So U1 went to the VeryImportantApplication and tried to fix it. It was tough going at first, and U1 almost gave up. "But everyone is counting on me! I can't let them down!?!" thought U1. "I must do this. I simply must!" And slowly, ever so slowly, repeating "I think I can...", little U1 gave its all. "I think I can... I think I can...", until suddenly, with a mighty burst of data, the VeryImportantApplication awoke with several off-color remarks, and cries of "What's all this, then!?!"


"I KNEW I COULD!" hummed U1 to itself as the day was saved and everyone was happy, except for U0, who wasn't paying any attention. And that is how U1 realized that it was good enough, smart enough, and gosh darn, everyone like it. After that a lot of TerriblyUsefulApplications took notice of U1, and it became quite popular in many influential circles. In fact, U1 was so popular that even after some larger rivals arrived on the scene, it continued to enjoy tremendous success.


Large - The Story of Jabba the UART


It wasn't long after U0 became reclusive and U1 stepped into a leading role that other aspiring UARTs took notice. Each added new features, or new twists. For a time such UARTs were extremely common, and a wide variety of them could be found just about everywhere. It was the golden age of UARTs when everyone felt free to interface with everyone else, and to head to the beach in skimpy swimwear to show off their bauds. No one was terribly worried about code footprint or power consumption because it was a time of plenty. Happy UARTs flourished in this environment. But the winds of change are inevitable, and eventually a big new powerful UART came on the scene and started gobbling up all of the sockets that the smaller, complacent, UARTs had previously enjoyed filling. No one really saw it coming, and in no time at all Jabba16550 became the force to be reckoned with.


query(uart)

Jabba was arrogant, and often cloned itself, forcing you to specify which instance you were actually talking to. Jabba also liked for you to query for the capabilities implemented by each specific clone, reporting all manner of things, like can rx, can tx, can be half duplex, can be full duplex, can use RTS/CTS, can use DSR/DTR, and other things that any given clone may or may not support.


enable(uart, cfg, baud, bits, stop, parity, handshake)

And Jabba let you specify things like 5-, 6-, and 7-bit sub-bytes, but really just to get a marketing checkbox. And of course you had to tell Jabba's clones exactly how you wanted them configured; you asked for their capabilities, after all.


disable(uart)

Honestly, no one ever really felt a need to do much about changing this request, other than specifying which instance you meant. Jabba toyed with options for things like 'assert flow control' and 'send a break first', but no one ever used those features, so they fell into disuse and were forgotten.


writable(uart)

This request let you ask Jabba if there was any room available for you to try to transmit something. Like, is your queue full or not? Or is your hardware FIFO full? Inquiring minds wanted to know, after all. Because just trying to transmit and checking for a nil tag was too plebeian. Additional operations were in vogue, and Jabba was there to meet demand.


write(uart, buf, len, timeout, callback, context)

Realizing that some folk actually had other things that needed doing, in spite of any transmitting or receiving that might be going on, Jabba decided to add callbacks, with contexts to pass to them, so that you could get on with other things while the transmitty / receivey things were under way. This had been done before, of course, so Jabba added more features. For example, it had a transmit queue so that applications could transmit multiple messages without waiting on earlier ones to finish. And the "write" request returned a tag value that could even be used to cancel the associated transmission. If the cancelled buffer was still in queue then no problem, but if it was already in flight, cancelling might not happen. In any case, you'd get a callback telling you when the buffer was finished: either transmitted successfully, timed out, or cancelled.

But Jabba didn't stop there. For you see, a timeout value of zero meant 'no timeout', so you could wait forever if you wanted. (This was done mostly just to irritate U0, but it was pointless as U0 wasn't receiving at the time.) And a callback value of NULL generally meant 'just don't bother notifying me', but a number of cheap imitation knock-off UARTs wearing off-brand test harnesses treated it as 'just do it synchronously and don't return until it's done'. Which was very dumb, and caused so much confusion that eventually Jabba had them backspaced.


readable(uart)

Jabba added the ability to ask if it was possible to receive currently. Which usually meant "is there room in the receive queue?" but sometimes meant "is there data in the hardware FIFO that I could get at if I called 'read'?" And all of the other transmit-related accoutrements were applicable to receive as well. Jabba offered one-stop shopping.


read(uart, buf, len, timeout, callback, context)

Jabba also added a receive callback, a receive queue, tags for cancelling, etc. This was the reverse of write, essentially. All very orthogonal; very posh; spared no expense. After all, if you were going to receive anything at all you might as well do it in whatever style you wanted. Jabba catered to a discerning clientele.

And it came to pass that Jabba had a very good run before things started to fall apart. After all, with such a smorgasbord of features and fiddly bits, why would you ever hire anyone but Jabba for the job? That, and the fact that if you went with a rival then Jabba's goon Guido was going to come give you a talking to, sooner or later.
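
Taken all together, Jabba's kitchen-sink interface might have looked something like this in C. Every name, type, and semantic detail below is an assumption reconstructed from the story above, not a real driver API; even the cancel operation is my own addition, implied by those tags:

    /* jabba.h - a sketch of Jabba16550's kitchen-sink interface.
     * Every name, type, and semantic detail here is an assumption
     * reconstructed from the story above, not a real driver API. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct jabba_uart jabba_uart_t;  /* one specific clone */

    /* Capability flags reported by query(). */
    typedef uint32_t jabba_caps_t;
    #define JABBA_CAP_RX       (1u << 0)
    #define JABBA_CAP_TX       (1u << 1)
    #define JABBA_CAP_RTS_CTS  (1u << 2)
    #define JABBA_CAP_DSR_DTR  (1u << 3)

    typedef enum { JABBA_DONE, JABBA_TIMED_OUT, JABBA_CANCELLED } jabba_status_t;

    /* Completion callback: fires when a queued transfer finishes,
     * times out, or is cancelled. */
    typedef void (*jabba_callback_t)(jabba_status_t status, size_t count,
                                     void *context);

    typedef uint32_t jabba_tag_t;  /* 0 is the nil tag: request refused */

    jabba_caps_t jabba_query(jabba_uart_t *uart);

    void jabba_enable(jabba_uart_t *uart, const void *cfg, uint32_t baud,
                      uint8_t bits, uint8_t stop, uint8_t parity,
                      uint8_t handshake);
    void jabba_disable(jabba_uart_t *uart);

    /* Is there room to queue another transmit / another receive? */
    bool jabba_writable(jabba_uart_t *uart);
    bool jabba_readable(jabba_uart_t *uart);

    /* Queue a transfer and get back a tag for cancellation. A timeout of
     * zero means 'no timeout'; a NULL callback means 'don't notify me'. */
    jabba_tag_t jabba_write(jabba_uart_t *uart, const uint8_t *buf, size_t len,
                            uint32_t timeout_ms, jabba_callback_t callback,
                            void *context);
    jabba_tag_t jabba_read(jabba_uart_t *uart, uint8_t *buf, size_t len,
                           uint32_t timeout_ms, jabba_callback_t callback,
                           void *context);

    /* Best effort: succeeds if the transfer is still queued; one already
     * in flight may complete anyway. Either way, the callback fires. */
    bool jabba_cancel(jabba_uart_t *uart, jabba_tag_t tag);

Count the vocabulary a client must learn just to move bytes: instances, capability flags, tags, statuses, callbacks, and contexts. That is the price of all that flexibility.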


This went on for a long time, but eventually things started to change. A young JEDART named USB stepped out of obscurity and swept across the marketscape, rapidly capturing a sizeable share of the TAM. This brash, idealistic young USB was the Force that finally cut Jabba down to size_t, but it didn't finish Jabba off. That task fell to another bold new player, code-name "Mobile". And Mobile was hell-bent on squeezing every last electron out of any battery it could lay terminals on. And that meant slower clocks, smaller memories, smaller code footprint, fewer pins; a return to the smaller, simpler devices from the before-times. UARTs that had previously thrived when resources were plentiful quickly starved and were forgotten. It was a famine of unimaginable proportion; it serves no one to dwell on the grim details.


Almost overnight Jabba's cartel collapsed, only to be (surprisingly quickly) replaced by new generations of smaller UARTs, with hip names like U2, FooART, BitBouys, GittenXOFF, RateThisBaud-Yo, and many others. Strangely, everyone seemed to think that these edgy new UARTs were somehow actually innovative. Really they were just posers, re-imagined imitations of their long forgotten predecessors, because anyone who fails to understand history ends up repeating it all over again, usually in weird, dumb ways. In any case, all of this led to the slimmer, more vegan-friendly U1 making quite a dramatic comeback when Mobile came onto the scene.


But Mobile was, well, mobile, and always on the go, and eventually begot two very demanding offspring, IoT and Edge, neither of which was willing to burn calories on anything non-essential. (This is partly attributable to the GreatLockdown when anything non-essential was discarded in its entirety, such as, for example, politicians.) And it was this moody but dynamic duo, IoT and Edge, who were largely credited with enabling a modest comeback for our old friend U0. In truth, Umin also made something of a showing, until the UsefulApplications realized Umin refused to receive and switched back to U0, who proudly re-assumed the titular role of "Littlest UART", for which it had been destined since the start, excitedly squealing "You like me! You really, really like me!"


Of course, in the end it was U1 who moved into a position of stable dominance. This is largely due to the fact that U0, expecting to receive an award for being the Littlest UART, locked up permanently waiting for an accolade which never arrived. To this day it is U1 and its VeryCloseRelatives who service the majority of IoTandEdgeApplications, with a few upstart UARTs appearing now and again, though with somewhat limited success.


"Options are what are available to anyone who keeps an open mind and is willing to question the comfortable assumptions of established thinking" -- David Weber, War of Honor


All silliness aside, one of the points I was trying to illustrate is that you have choices, and those choices have consequences. You can choose to constrain your solution to only involve low footprint highly optimized operations, but you are going to have to omit features that are at odds with those constraints. Conversely, you can choose to add features that require more resources and are potentially more difficult to use if the benefit of doing so outweighs the cost of a larger memory footprint and reduced performance.


As an example, if you decide that all programmer errors are bugs, and that assertions are the correct mechanism for handling them, then your functions may be able to get away without returning error codes. As a result, client code needn't bother checking return values, which reduces complexity and code footprint. But it does mean that you, and your clients, have to live with the fact that programmer errors will result in assertions (aka controlled crashes). Otherwise you need a different idiom and a different interface.
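
A minimal sketch of the assertion idiom, using a hypothetical uart_write() purely for illustration:

    /* The assertion idiom: programmer errors are bugs, so the interface
     * asserts instead of returning error codes. All names here are
     * hypothetical, for illustration only. */
    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    void uart_write(const uint8_t *buf, size_t len)
    {
        assert(buf != NULL);  /* a NULL buffer is a bug, not a runtime condition */
        assert(len > 0);
        /* ... transmit the bytes; nothing to return, nothing to check ... */
    }

    /* Client code stays flat, with no error paths to write or test: */
    void send_hello(void)
    {
        uart_write((const uint8_t *)"hello", 5);
    }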


So if assertions are unacceptable, or the fact that they are compiled out of release code is problematic because your clients only use the released code so you cannot catch their errors with assertions, then you are likely to consider returning some sort of error codes. Now your clients must of necessity write code to check the return values of all of your interfaces. This is a higher burden on the client, increases the code footprint, and adds overhead. But if doing so allows detection of errors injected by the client programmers then it may well be worth the cost.
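
And the same hypothetical operation recast with error codes; note how the checking burden shifts to every call site:

    /* The error-code idiom: failures are reported rather than asserted,
     * so release builds can catch client mistakes. All names here are
     * hypothetical, for illustration only. */
    #include <stddef.h>
    #include <stdint.h>

    typedef enum {
        UART_OK,
        UART_ERR_BAD_ARG,
        UART_ERR_TIMEOUT
    } uart_err_t;

    uart_err_t uart_write(const uint8_t *buf, size_t len)
    {
        if (buf == NULL || len == 0) {
            return UART_ERR_BAD_ARG;  /* still caught when asserts compile out */
        }
        /* ... transmit the bytes ... */
        return UART_OK;
    }

    /* Client code must now check, handle, or propagate every failure: */
    uart_err_t send_hello(void)
    {
        uart_err_t err = uart_write((const uint8_t *)"hello", 5);
        if (err != UART_OK) {
            return err;  /* log, retry, or propagate - the client decides */
        }
        return UART_OK;
    }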


What this actually boils down to is a variant of the Sapir-Whorf Hypothesis, which essentially states that the structure of a language affects its speakers' world view or cognition, and thus people's perceptions are relative to their spoken language. Your interface is the language that you are forcing your clients to speak. Therefore, you are imposing a world view, or a manner of thinking about things, onto them. (Recall that I previously pointed you at the 'blub paradox', which is related.) Because of this it is incumbent upon you as a craftsman to provide a language that allows your clients to converse about their specific domain of interest in the most expressive manner possible, within the overarching requirements and constraints of the system.


Do not take this responsibility lightly! Your reputation as a craftsman depends upon your ability to define the language that your clients will use to communicate their intent, and to interpret responses from your code. If you fail at this then your reputation as a designer will be diminished and your clients will be far less likely to seek you out for future projects. Understand the problem domain. Understand how your clients want to talk about it. Understand the constraints of your system. Make it easy for them to say the things they want to say, and possible to say the things they aren't (yet) interested in saying. Add measured portions of that, with the proper amount of system constraint, a dash of wisdom, and stir until the lumps have been removed.


End