Monday, January 2, 2023

Your Bluetooth is bad, January 2023 edition


I've finally gotten over a big hump in my Bluetooth Device Controller program -- I've been poking around with adding and fiddling with devices, and that means that the code has been getting much more "experimental". That's not a good thing for an app that I ship, and which has over 35 thousand downloads! I've been working hard to convert the experimental code into an app that people can use without too much frustration.

With that, it's on to the next installment of this series on crappy Bluetooth protocols, focused on the Govee line of air sensors. The version 1.10 app supports the Govee 5074; the next version (presumably 1.11) will support the 5075 and 5106. All of them suffer from the same three flaws, and the 5106 has a unique and fun new flaw.

Don't shut off communications too early. All of the Govee devices like to shut down their Bluetooth connections really fast -- after about 4 seconds, they shut down the connection even if you've been talking on it. Other devices will wait until the connection has no traffic before shutting down.

Just transmit your freaking data. If you provide data, just provide it: make a characteristic, and make it readable and notifiable. 

Don't fold multiple values into decimal values. This is harder to explain. The Govee Air Sensor, as an example, sends out temperature, humidity, and air quality data in a single advertisement. But instead of just filling in 3 two-byte integer values, they instead take the temperature and multiply it by 1_000_000. They then take the humidity and multiply it by 1_000. Then they add the air quality. This is all written as a single 4-byte integer.

To decode this monstrosity, you have to read in the 4-byte unsigned integer (in big-endian mode, even though Bluetooth is mostly little-endian). Then do a weird combination of MOD and integer divide operations to split out the three numbers.
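Here's a sketch of that decode in Python. The function name and the example values are mine, and whatever fixed-point scaling each field uses on top of this split is device-specific:

```python
import struct

def decode_govee_packed(payload: bytes) -> tuple[int, int, int]:
    # The 4-byte value is big-endian, even though Bluetooth LE is
    # mostly little-endian.
    (packed,) = struct.unpack(">I", payload)
    temperature = packed // 1_000_000      # temperature * 1_000_000 ...
    humidity = (packed // 1_000) % 1_000   # ... plus humidity * 1_000 ...
    air_quality = packed % 1_000           # ... plus air quality
    return temperature, humidity, air_quality
```

For example, the bytes 0C D7 99 4E are the packed value 215_456_078, which splits back out into (215, 456, 78).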

Use the right Manufacturer code. The Govee devices mostly use a made-up EC88 manufacturer code; this is an unassigned value that nobody should be using. But the 5106 Air Quality monitor, for no apparent reason, uses the Nokia Phone code (they are manufacturer #1).

FYI: Common Timeout connection parameters

Each Bluetooth LE device can provide a set of connection parameters. These are decoded (now) by the Bluetooth Device Controller; they are part of the "Connection Parameters" (2A04) characteristic of the "Common Configuration" service (1800). The timeout is the last two bytes in little-endian format. For example, if the last two bytes are "90 01" in hex, that's 0190(hex) which is 400 (decimal). The value is in 10s of milliseconds, so the 400 (decimal) means 4 seconds for a timeout.
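The decode above is small enough to sketch in Python (the byte order and the 10 ms unit are exactly as described; the function name is mine):

```python
def connection_timeout_seconds(last_two_bytes: bytes) -> float:
    # The timeout is a little-endian 16-bit value in units of 10 ms.
    raw = int.from_bytes(last_two_bytes, "little")
    return raw * 10 / 1000.0

# The "90 01" example: 0x0190 == 400, and 400 * 10 ms is 4 seconds.
```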

Looking at my device library, common settings here are:

  • 100 ms used by the SensorBug
  • 175 ms used by the Sphero
  • 4 sec used by the microbit, the govee, the kano coding wand, the viatom, the vion, and skoobot, smartibot and espruino
  • 5 sec used by the gems activity tracker
  • 6 sec used by the Mipow and the sense peanut
  • 10 sec used by the inkbird, lionel, the pyle, the powerup, the various sensor tags, and the dotti


Tuesday, September 6, 2022

Clipboard data for Excel

I recently wrote a new app for Windows, the Simple Wi-Fi Analyzer. It will scan for nearby Wi-Fi hotspots and present you with lots of information about each one. But that's not what this post is about. This post is about how to put text on the clipboard that Excel can read.

You might think that CSV is the answer. It's not; Excel will treat it badly. An Excel expert will know to do a Paste Special and then laboriously tell Excel all about the delimiters, but that's a terrible solution.

The right solution is to use HTML. You can put text onto the clipboard as text, but encoded as HTML. In my new app, the HTML on the clipboard looks like this:

<html>

<body>

<table><tr><td>WiFiSsid</td><td>Bssid</td><td>BeaconInterval</td><td>Frequency</td><td>IsWiFiDirect</td><td>NetworkKind</td><td>Rssi</td><td>PhyKind</td><td>SignalBars</td><td>Uptime</td><td>AuthenticationType</td><td>EncryptionType</td></tr>

<tr><td>APName</td><td>1g:x2:9a:86:02</td><td>0.1024</td><td>5.745</td><td>False</td><td>Infrastructure</td><td>-72</td><td>Vht</td><td>4</td><td>21.13:42:21.4404040</td><td>RsnaPsk</td><td>Ccmp</td></tr>

</table>

</body>

</html>

And shazam! it pastes perfectly! Except that Excel seems to think that setting the column width is beneath its dignity, but whatever.


BTW: the <body> isn't needed. And you'll need to encode the strings with System.Net.WebUtility.HtmlEncode. That method will escape all of the HTML special characters (like "<") into their HTML-safe versions (&lt;).
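The same idea as a sketch in Python, where html.escape plays the role of HtmlEncode (the function name and row data are invented):

```python
import html

def rows_to_clipboard_html(rows) -> str:
    # Build the minimal <table> markup that Excel pastes as a grid.
    html_rows = []
    for row in rows:
        cells = "".join(f"<td>{html.escape(str(value))}</td>" for value in row)
        html_rows.append(f"<tr>{cells}</tr>")
    return "<html><body><table>" + "".join(html_rows) + "</table></body></html>"
```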









Sunday, March 20, 2022

Review: God and Golem, Inc (Norbert Wiener) -- 1964, MIT

TL/DR: I'm glad to have read the book but can't recommend it. The interesting ideas are now widely accepted (computers can learn, and we can't rely on computers to make decisions).

Best Quotes

"A goal-seeking mechanism will not necessarily seek our goals" (page 63)

"This is only one of the many places where human impotence has hitherto shielded us from the full destructive impact of human folly" (page 64)

"A digital computer can accomplish in a day a body of work that would have [required] the full efforts of a team of [human] computers for a year, ..." (page 71). A modern 2022 computer can do the work of 40,000 people for a year in about a second (a Core i5 can do 34,969 million FLOPS).

"Written for the intellectually alert public"

The book cover flaps are, unusually, one long continuous text that summarizes the book. The final paragraph: "... written for the intellectually alert public, does not require of the reader that [they] have a highly technical background." I suspect that this is editor code for "all the glamor of a calculus textbook, but without the equations."

I originally picked up this book second hand as part of my overall interest in everything in the history of my computing profession. This is the first time I've managed to get all the way through while also grasping what the heck Norbert is trying to say. It helped that I put in lots of annotations and had access to the internet.

Theme: Computers will be like humans

If you accept that the Star Trek character "Data" is a "sentient being", then you already agree with Wiener. The entire book is trying to get us to understand that eventually computers will have all of the attributes of sentient life.

The book was written in the early 60's (the publication date of 1964 is misleading; the book is a rewritten amalgam of earlier lectures), which is before "Star Trek" and sentient robots for the general public, but it's written long after Isaac Asimov's Robot series (including the books with Daneel Olivaw).

Wiener's basic thesis is that "computers" need to be considered in three ways: can a computer learn, can a computer reproduce, and which functions should be handled by humans and which by computers?

Can computers learn (spoiler: yes)

The "can computers learn" question is now well understood: yes, they can. Wiener has a strongly intelligence-is-everything point of view: in his opinion, as soon as a game is theoretically understood, it ceases to be of any interest at all to anyone. The obvious counter-example -- that people still play tic-tac-toe -- goes entirely unconsidered.

This section, BTW, is what propelled me to write notes in the book. Wiener will bring up a person's name on one page, mess about for 15 pages, and then bring back that name assuming that you remember it.

Can computers make a new computer? (spoiler: eventually, yes)

The section on whether computers can duplicate themselves can only be understood by people who understand the complex dead-end mechanisms used in WW2 artillery fire control systems. This is something Wiener excelled at, and he has great enthusiasm for it. But a better example is the numerically controlled machine tools that were already available -- a computer can guide the tools needed to build more computers.

The section is also somewhat weird. Biologists love to use "can reproduce themselves" as part of the important distinction between living and non-living. But from a legal or religious perspective, it's bunk: people don't have more or fewer rights because of their ability to reproduce.

What's the right place of computers? (helper, not decider)

Wiener correctly foreshadows the problems of having computers be the ultimate decider of critical actions, while also missing most of the problems that we're bedeviled with currently.

He's got a lot to say about nuclear war (fifty years later, we thankfully have never had another nuclear war, although arguably several wars have been highly influenced by the nuclear capabilities of the sides). He's rightfully skeptical of automated launch systems -- the reality of most alerts is that they are false alarms.

So, he says that computers will be like humans? (answer: no)

On the one hand, he's got a lot to say about how computers can theoretically learn, mutate, and reproduce. But he doesn't carry this to the logical conclusion: that computers will eventually be sentient (a possibility he doesn't bring up at all). Instead, he argues that we humans must block any attempt to have computers make decisions that affect us humans. He's firmly in the camp that computers are good helpers for the human intellect but are ill-suited to being in control.

And right now, I'd say he's right. We see computers making "unbiased" decisions on health care that turn out to be racist (*), or "unbiased" justice decisions that put one set of people into jail. And we see clearly during these days of the Ukraine war that computerized messaging can be a tool to amplify one position or another.

If it's not physics, it's crap

Holy cow, there's an entire chapter devoted to bashing the mathematical formulations of anything that isn't physics. He's got a lot to say about how (for example) mathematical economics can't possibly ever be useful because getting good data is hard. What he misses is that we can deal with the data being wonky. During the pandemic, we all saw the strange way that death rates would fluctuate, only to be explained that this state or that was behind in its processing, and would periodically catch up by providing one giant batch of data. Similarly, the reason that some states (like Florida) have a low death rate is that all visitor deaths are reported by the home state.

One problem some academics have is that they can see how their own field is impacted by whatever the new thing is, but they can't imagine how this will impact other fields. Famously, after WW2, the British government commissioned an academic to decide if these new "computers" would be useful. The academic could easily see how their own particular field would benefit (x-ray crystallography), but couldn't imagine that computers would be useful in any other field.

Wait -- what's all this religious stuff?

Wiener loves to talk religion. He's not very good at being particularly coherent about it. FYI: the sin of simony isn't related to Black Masses.

Where's the golem?

The golem is the Golem of Prague. It's mentioned in passing on page 49. Considering that it's the overarching theme, you'd think it would be mentioned a bit more. It also appears once on page 95, in the conclusion, in an attempt to explain why the book is called God and Golem, Inc.

What's with the ", Inc"?

The title is best parsed as "(God) and (Golem, Inc)". For years I've been assuming it was best read as "(God and Golem), Inc". He's comparing the for-profit creators of computing machinery ("Golem, Inc") with God. 



(*) I can hear the "well, actually" crowd now. "Well, actually, the computers aren't racist; they merely use racist data to implement racist policies that have disproportionate impact on different races in a way that dehumanizes people and creates additional stumbling blocks, but the computers themselves aren't racist". Well, actually, that attitude is bogus.

Tuesday, November 9, 2021

IBM 610 Auto-point: weird 1950's computer

IBM 610 Auto-Point computer (annotated)



Have you ever gone into your pantry, closed your eyes, randomly picked out the first dozen ingredients, and challenged yourself to make a dinner from whatever you grabbed? Well, it sure seems like that's how IBM designed the 610 computer.

The always-awesome bitsavers site has a couple of manuals for the IBM 610 auto-point (an old name for floating-point) computer, including a snazzy brochure and an operations guide. The breathless prose ("arithmetic and logical problems can be solved on the spot") hints of a world of promise, but a peek under the covers shows that this is, in fact, a bit of a monstrosity.

The keyboards

There are two keyboards, which seems like a lot. The one further on the left is called the "typewriter" and is a repurposed electric typewriter (which IBM also made, so they had them in stock). The typewriter is used to print out the results. As a special feature, you could type on the typewriter, and it would type onto the paper. There's no way to type on the typewriter and get it into the computer.

The specialized keyboard on the right is the "console". IBM loved their consoles. It's where you enter in your data, and it's also where you create your programs. The console is not to be confused with the control panel, which is another thing entirely.


The console has 43 keys. There are 11 number keys (0 to 9 and decimal point), plus 7 common math operations (+ - * / square-root convert [change sign] and a combined divide/multiply).  There are 2 blank keys, because why not. The rest of the keys are for controlling the machine, and entering in commands.

Programming the machine

You might be thinking, "what languages does this machine handle?" The answer is: take a look at the keyboard. Whatever you can type there, the machine can do. Each possible machine opcode is a single keystroke. That might be nice if this were, say, a Sinclair ZX80 running BASIC. Instead, these are rather bizarre opcodes. Let's divide them up into groups.

I should point out that you can also program the computer via a program punch tape, which just duplicates the keyboard but weirdly, and you can program the computer via the control panel. And they can be mixed together, and the person at the keyboard can always override whatever commands you set up.

Input (control) selection keys (4): KB DTR PTR CP. Says which input device the computer should use for control: keyboard, data tape reader, program tape reader, and control panel. 

Output keys: TYP CR TAB DTP RO. The first three turn on the typewriter, either at the current position, after a carriage-return, or after a tab. DTP turns on the data tape punch. RO will write the current register out to the selected output -- so to write a number to the typewriter at the current position, you have to do a TYP RO. But this won't work, because RO doesn't really do the auto-point conversion; first you have to do an SL15. The RO then undoes the previous SL15, giving a truly weird side-effect.

Register edit keys: CLR CLR-RH COPY SL15 SR15 SL SR. The normal kinds of things like CLR to clear a register, CLR-RH to just clear the right-hand half of the register. SL15 and SR15 are just bizarre, but you have to use them to get output.

Control keys: REL INT RSM ENT
REL will drop out of the current operation and reset the selected register. INT (interrupt) will interrupt the current operation, but if you press it and a particular light goes on, you have to press RSM (resume) until the light goes off. ENT will "prepare the machine to enter data into a register".

Other keys: SEQ A DEL
The A key is used to select the A register. Otherwise, you'd have to select it by number, which is register 2. DEL will help fix any data entry mistakes. SEQ is special, and deserves a section all to itself.

Lights, more lights, still more lights


The keyboard includes a set of lights that help you figure out what the computer is doing, and a set of "check" lights. 



But wait, there's more. The keyboard also includes a tiny, 2-inch (5 cm) cathode-ray tube (like an LCD screen, but uses more electricity). That screen lets you view the contents of the current register as tiny dots. 

Here's the pattern for "I'm entering the number 22.37".

The actual little numbers 0..9 aren't displayed; you just have to kind of squint and carefully measure where the little dots are. It's not (seemingly) calibrated, and each column can only display one dot. No, you can't display DOOM on this.


The main body of the computer also has lights, this time to tell you what the current program step and current registers are, plus whether the machine is off, on, or really on.

SEQ (Sequence)



Never have I ever read that description and understood it. But I'll try. A "hub" can't be described until "control panel" is described. A control panel is a set of bulk-removable wiring that can customize many kinds of very old IBM machines. Control panels predate computers, which is why they are so very deeply different.

If you have, say, a device that reads in punch cards and then prints the results, you might have a control panel with 80 wires, one from each card column that gets read going to one print position. Often they will be "straight", so that column 10 on a card will print into column 10 on the printer. But you can get fancy: you can print only some of the data, or duplicate some columns. And you can "suppress leading zeros" for some set of data, so that if the card is punched as "00020" you can print just the "20", which is often much easier to read. And it gets so, so, so much more complex.

A "hub" can now be described: it emits a pulse, so that you can have sequences of events. Yeah, sorry, not super clear. What can I say: IBM has hundreds of pages about hubs. 

The "machine functions" that the control panel opens up include things like "loops". That's right, writing a program with loops is impossible with just the keyboard; you have to wire it yourself.

You might also want to program with fancy "if" statements. Those are available when you use the paper tape. The paper tape uses an 8-channel (8-bit) code. The top two bits say what "class" any particular instruction is in -- classes 0, 1, 2 and 3. You can specify which classes of instructions you want to run at any time. Yes, this means you get a main body, an "if-else" statement, and a remaining "if" statement, and that's it. But good news: you can interleave the different statements together.

But wait -- which class gets used? The answer, of course, as with everything about this machine, is that it depends. There are four switches on the manual keyboard, one for each class, and they can be set to "always", "never" and "depends on the programming panel".

That auto-point isn't really floating point

IBM was really happy with their "auto-point" concept. If you've never used the previous technology -- which would be a "slide rule" -- those devices don't include the magnitude of the number at all. That is, you multiply "1.23" x "6.78" in the exact same way that you multiply "123" x "678" -- you just have to remember where the decimal point is.

With the "auto-point" concept, you get a bunch of registers, each of which can hold a number like "1.23" or "6,780". As you enter each number, when you get to the decimal point, the number will automatically adjust in the machine so that the integer "left side" of the decimal point uses half of your register, and the fractional remainder goes into the right side of the register.

On the one hand, this is convenient: you don't have to remember where the decimal point goes in your result of 83394. On the other hand, very large and very small numbers are absolutely impossible, and your precision will vary all over the place.
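A toy model of the scheme in Python makes the trade-off obvious. The 8-digit register and the exact half-and-half split are my illustration, not the 610's actual word size:

```python
def autopoint_store(value: float, digits: int = 8) -> str:
    # Half the register holds the integer part, half the fraction:
    # the number is scaled so the decimal point sits in the middle.
    half = digits // 2
    scaled = round(value * 10 ** half)
    if not 0 <= scaled < 10 ** digits:
        raise OverflowError("number does not fit the register")
    return str(scaled).rjust(digits, "0")
```

Entering 22.37 stores "00223700". Anything of 10,000 or more (or negative, in this toy version) simply doesn't fit, which is exactly the "very large and very small numbers are impossible" problem.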

In summary: 

Every single part of the IBM 610 is harder to understand than it needs to be, and weirder, and pointlessly duplicated, with extra complications thrown in just to try to keep everything kind of working.



Wednesday, October 13, 2021

Learning Typescript, and why I'm not a fan

A work project I'm helping out with uses TypeScript; I tried to use it for my extension, and now I just use JavaScript. It's all because the TypeScript documentation is bad, the module system is silly, the compile times are slow, and their target user is 100% not me.

I'm a little bit of a computer language enthusiast, and have been for years. My first intern project was to make a YACC grammar for a Fortran "wirelist" program for Teradyne (hi, Chuck!); I designed and built a technically-oriented terminal-based hypertext system for electrical engineering; I created an incredibly simple search language for a game company (technical requirement: must be functional in less than one day, because otherwise we'd have to use my boss's approach, and he was wrong).

I was enthused by having a reason to jump into TypeScript for this project. I 100% love the concept of TypeScript: it's like JavaScript, but adds in types, so you make fewer mistakes. Who wouldn't like that? I'm not a fan of being all loosey-goosey with naming, and appreciate the little boost that TypeScript adds. The generated JavaScript code matches well with the original, making debugging easier.

And then it all went wrong. After a successful start, within a day I stopped working on the TypeScript source and instead just edited the JavaScript file.

The compile speeds take me out of the flow. My file is just a few hundred lines long; in JavaScript I can just reload. With TypeScript, you have an awkward pause. The pause is for no technical reason; my files are small, and a reasonable program would be able to read, parse, and convert them in under a second. (My own current language project is a language converter; my goal is <1 second for a 1K-line file.)

The module documentation is much too terse. Specifically, if you already know how modules work, and know what you want, then you can understand the module documentation. Otherwise, it fails to provide basic information about what the settings do, and when to use them.

Modules simply emit errors. The goal of TypeScript is that it generates working JavaScript. There are two settings for modules: ones that generate non-working JavaScript (the browser sees an import statement and complains that it doesn't know what require means), and ones that spit out long lists of compiler errors about not finding some package that I'm not asking for (some configuration language).
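For anyone stuck in the same place: one combination that should produce browser-loadable output is ES-module emit. A minimal tsconfig.json sketch (these are standard compiler options; the values are one working combination, not the only one, and the HTML side also needs <script type="module"> for the import statements to work):

```json
{
  "compilerOptions": {
    "target": "es2017",
    "module": "es2015",
    "strict": true,
    "sourceMap": true
  }
}
```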

If your customers are highly motivated people then you can get away with badly documented features that generate errors. I'm not that highly motivated, and I have an alternative.

Why do I even need modules? Typescript requires modules for two reasons: 

The --watch command that's needed to make compile times acceptable only works with the --build switch, and that in turn only works with modules. It would have been nice if I could have just typed tsc file.ts --watch and been done with it.

As soon as you have two files, you have to have modules. Otherwise, nothing works.

The language documentation is a barrier to understanding. The documentation for TypeScript hardly presents an easy onboarding experience. There's pretty much nothing that I found that presents a high-level workflow, or explains their design choices.

Mathematicians are the bane of computer documentation. I firmly believe that there's a mathematician's brain that some people have, such that they read equations and very short, very succinct descriptions, and from that generate an entire field. It's actually an awesome ability, and it makes them write completely useless documentation for the rest of us. (Note: I have a degree in mathematics.)

TypeScript is full of the mathematician's approach: provide a tiny number of words, with no worked-out example, starting from first principles (which no beginner knows) instead of from what beginners need to read.

I wanted TypeScript to be a powerful new tool in my toolbox for designing programs. Instead, after multiple fruitless hours of trying to make TypeScript work within my workflow, I simply gave up and embraced JavaScript. And it makes me sad.

Monday, May 31, 2021

Filtering out distant Bluetooth signals

TL/DR: nearby Bluetooth devices have a RawSignalStrengthInDBm in the -50s and -60s.

I love playing with Bluetooth devices and writing little apps to control them (including the very special Gopher of Things). One of the hassles with developing, though, is that we're in a sea of Bluetooth devices. Any "watcher" code you write will be inundated with events from everyone else's devices (notably their Apple devices, which helpfully send lots of Bluetooth advertisements).

So how to filter them out? Step 1 is to look at the RawSignalStrengthInDBm in your Bluetooth watcher's BluetoothLEAdvertisementReceivedEventArgs argument. I did a little experiment: all of the devices I was interested in coding for had a signal strength in the -50s and -60s. Everything at -80 and below was noise from the rest of the house.

Note, though, that the strength is in decibels and is negative. A strong signal is -50 and a weak signal is -89. To quickly return when the signal strength is too low, do this:

    const int filterLevel = -75;
    if (args.RawSignalStrengthInDBm < filterLevel)
    {
         return;
    }


In my test, this filters out most of the undesired signals.


Wednesday, February 24, 2021

Everything wrong with the FINGER protocol


For those of you who have never heard of it, Finger is one of the old "little"¹ TCP services. As a user of a big multi-user machine, you can edit the ".plan" file in your directory; people can then run a command like finger person@example.com and it will retrieve your .plan file along with other information like where and when you last logged in. It was a super useful way to coordinate with teammates back in the days before cell phones had been created.

 The protocol itself is pretty simple: the finger command sends a single line of data with the user name, and the server replies with a bunch of text and then closes the connection. So what could go wrong? In this minor screed, I list both things that should have been known at the time, and also things that we know about protocols today that weren’t known then. 

TL/DR: the spec is wrong, confusing, incorrectly implemented and potentially dangerous. But other than that, it works pretty well :-) 

The protocol spec is incorrect (/W). 

Firstly, the finger spec, RFC 1288, is wrong. The "BNF" query notation, section 2.3, with query type #1, attempts to allow an optional /W before the user. The /W is the verbose switch (W stands for "whois") and servers can reply with more information when it's provided. (This is accessed by the finger -l person@example.com switch; -l stands for long). But that's not what the BNF actually says. What the BNF says is that the /W switch is required whenever a username is provided. What should be an optional switch becomes a mandatory one.

Good news! Every actual finger client implements what the spec tried to say and not what it failed to say. Which is good, because a number of existing (as of February 2021) Finger servers implement the earlier RFC 742, which doesn’t allow the /W switch. 
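The client side of the protocol is small enough to sketch in full. Here's a hedged Python version (per RFC 1288: an optional /W token, the user name, CRLF, then read until the server closes the connection; the function names are mine):

```python
import socket

def build_query(user: str, verbose: bool = False) -> bytes:
    # RFC 1288 "Q1" query: an optional /W (verbose) token, the user
    # name, then CRLF.
    return (("/W " if verbose else "") + user + "\r\n").encode("ascii")

def finger(user: str, host: str, verbose: bool = False) -> str:
    # Send one query line, then read until the server closes the socket.
    with socket.create_connection((host, 79), timeout=10) as sock:
        sock.sendall(build_query(user, verbose))
        reply = b""
        while chunk := sock.recv(4096):
            reply += chunk
    return reply.decode("ascii", errors="replace")
```

Here build_query("person", verbose=True) produces b"/W person\r\n", which is what finger -l person@example.com sends on the wire.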

The protocol BNF is clumsy. 

The protocol “BNF” in general is more formalistic than useful. There’s an old saying that every level of indirection makes code harder to follow; the corresponding saying for BNF is that simple and common definitions like CRLF should be spelled out each time they are used, not hidden behind a layer of naming indirection. The BNF also loves using short names; {C} is the name of the rule that eventually expands to CRLF, and {U} the rule for user names.

Additionally, the BNF is split into two rules: one for direct user lookup, and one for an indirect network lookup (these are Q1 and Q2 in the BNF). But this makes the Q1 clumsy, as it has to handle both user lookups with no user, and user lookups with a user. A better split would be three query types: a NULL query (with or without a /W), a user query (also with or without a /W) and a network query. 

On-behalf-of is not good networking 

We can totes forgive the original spec for adding in the slightly weird “Q2” format. This format is used when we're asking server “A” to ask server “B” for information. It’s like the user can’t get the information they want directly; they have to go through a gatekeeper server. The other servers are called Remote User Information Programs (RUIP). Back in the 1970s when the RFC was created, the internet was often provided to a single computer at a site; the site then used other protocols and networks to connect to other computers at the site (hence the internet used to be described as a “network of networks”, which were expected to use non-Internet protocols).

But in modern times, the Q2 “on behalf of” experience isn’t needed. Indeed, none of the servers I found would handle it. 

Massive security issues 

 Finger servers often return the time and location of user logins. For example, FINGER might say that a particular user is currently logged in at a particular terminal in a particular room. This is handy when dealing with friendly teammates, but is totes wrong when dealing with stalkers and worse. Lots of people really don’t want other people to know where they are. 

Giant compat issues with modern servers 

You might be confused by this one – what could I possibly mean about modern Finger servers? Have there even been any modern Finger servers at all? Why would anyone build a new Finger server given that the Finger protocol is often blocked by firewalls and provides very few features needed by people. 

It turns out that just looking on GitHub shows a bunch of different Finger servers. These servers are mostly derived from the original RFC 742 Finger protocol. It’s almost the same as the RFC 1288 Finger, but doesn’t allow for the /W switch. Other servers attempt to handle the /W switch, but don’t do it correctly (finger.farm, for example, failed until recently).  


One more thing about the /W switch spec: case-insensitive

[Later edit]: the RFC series has long declared that bare strings in BNF descriptions should always be assumed to be case-insensitive: "monday" is the same as "Monday" and "MONDAY" and "MoNDAy". The FINGER spec takes the opposite approach: the /W switch, AFAICT, is actually case-sensitive and should always be upper-case.

As a fun aside: the RFC editors are, in this instance, wrong. While I understand why they decided that BNF should be case-insensitive (it's part of our text-based heritage), it's also the case that the workaround they use (specifying case-sensitive strings as hex characters) is demonstrably error-prone. I've personally filed about a half-dozen different bugs against Internet protocols for getting the hex representation of strings wrong.

The best solution is to require each BNF description to say if they are case-sensitive or not.

Use these learnings for your own protocols!

Finger is part of the old tradition of text-based services that are almost designed for direct command-line manipulation. As such, it’s now mostly out of favor (when was the last time you read your email by directly talking to a POP server?). That said, there are still lessons from FINGER for today. 

  • Simple, direct protocol descriptions are easier to debug than complex ones. 
  • Be aware of bad actors. Don't let your APIs enable stalkers and thieves. 
  • Make sure that the easy path for handling your protocol also allows servers an upgrade path. 


 


Note¹: Finger is one of the little TCP services noted in RFC 848, along with echo, discard, systat, netstat, qotd, chargen, and a couple of time-related services.