
Feedback: UIs, Mac pains, hardware, teaching and octal

source link: http://rachelbythebay.com/w/2020/11/17/feedback/


It's time for more reader feedback, and some responses.

Regarding the "type the exact number to continue" post of a few weeks back, an anonymous reader says:

Fun. My favorite related UI trick, from a load-balancer-management tool at $BIGCO. 3 checkboxes for a dangerous action. One: "I understand that this is a dangerous thing and am sure it is safe." Two: "I really do understand." Three: "I am just clicking checkboxes without really understanding what is going on." If you check box three, you are not going to space today.

That's an interesting one, but I think it would work once or maybe twice before people would program themselves to do it on autopilot.

It's funny, right? Some of these companies employ a non-trivial number of people who use evil human-hating patterns to keep their business alive. They are probably really good at moving things around in order to make you click on an ad you didn't intend to, for example.

Wouldn't it be interesting if they used their skills to keep people from hitting the dangerous button without first having to work for it a little?

Yeah, that'd never happen. Pipe dream on my part.

About my ongoing Apple Thunderbolt woes, another anonymous reader asks:

Great work finding this bug, but just out of curiosity, why would a normal person want to do all that in their day to day work? Isn't this kind of a corner case?

A reproduction case (which is what my post shows) is going to look odd and stilted. It's because you've reduced what is normally a random, messy, unspecific pile of events into a specific sequence that is needed to demonstrate the problem.

In other words, you have the arrow of time backwards. The problem came up organically based on using the machine in a way that I thought was normal. This is what I consider normal for a laptop, by the way:

I use it at my desk, plugged into a monitor, charger, keyboard/mouse, and maybe something that provides an Ethernet connection. Now and then, I get tired of sitting at the desk. Maybe I want to look out the window or watch TV. So, I unplug the laptop and cart it over to the couch and use it from there for a bit.

Later, I want to use the desk to do something that benefits from having a big screen and a real keyboard, so I bring the laptop back. I reconnect it to the same devices... and it promptly shits the bed.

That is the 2020 Apple laptop experience for me right now.

If not for the Twilight Zone making everyone work from home, I'm sure you would be seeing more reports of this. It's a combination of being docked, then put to sleep, then being undocked, and then later being reconnected. If you work at a company that had conference rooms and somewhat regular meetings, you probably did all of these things! You just haven't done it on the new machines, since they started shipping right around the time we all stopped going in to offices.

Just think - it wouldn't be the first time that the way actual people used the device (legitimately) was completely out of whack with the way the testers rolled. Remember the iPhone 4 "antennagate"? What do you want to bet that happened because the test units were squished into plastic cases in order to look like the old model (3GS) and not attract attention? Plastic case + hand is not the same as metal antenna band + hand.

The other thing that really bothers me is that Apple has clearly moved on, what with the ARM-based machines that have come out. I get the feeling that the 16" MBP and the two 2020 machines (MBP and MBA) are basically hardware orphans now, and they won't give a damn about fixing this situation.

Want to see how bad it is? Check out the pages and pages of comments from people experiencing the same thing on their machines.

Incidentally, there are multiple ways this can fail. Sometimes, the machine doesn't panic, but it DOES refuse to recognize any of the ports behind the dock or monitor. So, if your monitor or dock has an Ethernet port, nope, you're not gonna be able to use it. Got a fancy SD card reader on there? Forget about it. Is your keyboard or mouse behind them? Same deal.

I don't understand why some journalist hasn't picked this up and dragged them over the coals for their brokenness. This is insanity.

Another anonymous question asker inquires about hardware:

I know, I know. You hate hardware. BUT aren't there times where you wish you had a faster CPU for a certain demanding workload? Do you keep up with who's got the best bang for buck in terms of performance (AMD vs. Intel)?

Well, sure, when the need comes up. Way back in 2011 when I was first playing with the software-defined radio stuff, I was a complete idiot about all of it. I had no idea what a reasonable amount of CPU utilization was for a given workload.

So, when I tried running gr-smartnet to decode a local system on my workstation at the time and sent it into an ACPI thermal trip that started a shutdown (!!!), I thought "I need more CPU for this". That's when I started trying to solve for the multi-dimensional problem of "CPU that's in stock, that works with a motherboard that's in stock, that works with a case that's in stock, that I can grab TODAY at the local store". It was one of those things where it was around 3 or 4 in the morning, and I was waiting for them to open. There's no way I was going to wait to have stuff come in the mail.

I went to Micro Center that day with a list of items, and found the actual stock situation was different. I wound up putting together a different mix of things based on what was on the shelves, and built a new workstation from that pile of stuff. It was significantly faster, and was able to cope with the terrible implementation that was pulling far too much CPU for what it was doing.

Months later, I started understanding this stuff better, and got the CPU utilization cleaned up to where it would run on the old box. But by then, it didn't matter, since the new one was taking care of business.

Basically, outside of the current COVID situation, if there's a time when I can go out, do some amount of shopping, and solve a problem in short order, I'm going to jump at it. That motivation has sent me to Micro Center (back when we had one), Fry's, and all kinds of other places too.

Outside of that, there's no reason to track this stuff. I can totally do lazy evaluation on hardware specs at the time I actually need something, and it'll work out just fine.

I get that gearheads love knowing about the newest trilinear room-over-room mipmapping stuff just because it exists, but that's just not my bag and never has been. It has to solve an actual problem first.

The last reader also sent in a question on a different topic in the same comment, so I've split it out here:

By the way, I'm not flattering you but have you thought of creating a Udemy course? Heck, you could create a course on your website teaching people what you know and make it donationware (I know that's kinda unfair for years of hard-learned experience but economically it might bring in more money by lowering the barrier of entry) or you could charge a nominal fee. Make the first few lectures free for all and just when people are enjoying the learning process and dying to find out what comes next, slam them with a paywall! I know, sounds evil but you gotta live, right? Stay well!

Shortest answer: I have not thought of creating a Udemy course.

Longer answer: I've thought about the possibility of teaching people, and have actually done some ad-hoc long-distance tutoring and mentoring (recently, even), but have never really thought about a class in the current context.

It's weird, because I used to teach an hour-long class at the big blue cat picture factory to all of the new employees about what to do when stuff breaks in production, and how various (in)famous outages had been handled in the past. It was easily the highlight of my day any time I did it, and the students seemed to dig it as well. I liked to think it gave them a good start on the way things used to work there.

It would take a fair bit of fancy footwork to do a class like that apart from any particular company's environment. I end up having to dance around the problem of which company had which outage. People like to think they know which one is which, and sometimes they're right, but the truth is that the general case keeps happening at multiple companies.

Add a new inter-building link, fat finger the netmask as /3 instead of /30, and eat traffic for 1/8 of the Internet? Multiple places. Totally.
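To see why that particular typo is so nasty, here's a minimal sketch using Python's standard ipaddress module (the 10.0.0.0 prefix is just an illustrative value, not from the original outage):

```python
import ipaddress

# A /30 is a tiny point-to-point subnet: just 4 addresses.
link = ipaddress.ip_network("10.0.0.0/30")
print(link.num_addresses)  # 4

# Fat-finger the mask as /3 and the route now claims 2**29 addresses,
# i.e. one eighth of the entire IPv4 space. strict=False is needed
# because host bits are set once the mask shrinks to 3 bits.
oops = ipaddress.ip_network("10.0.0.0/3", strict=False)
print(oops.num_addresses)          # 536870912
print(oops.num_addresses / 2**32)  # 0.125
```

One dropped character turns a four-address link into a black hole for 1/8 of the Internet, which is exactly the failure mode described above.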

Another option in terms of classes would be something like "how I develop stuff". That assumes that anyone actually cares about the specifics of how I do things, and whether the outcomes are actually wanted by anyone. How many people are down to use C++ in a regular text editor and just build stuff up from there? I don't do it (not using an IDE) to show off - it's just that I've kept doing what kept working for me.

Similarly, how about the whole thing about going through and building a code base where every single bit of negativity and paranoia about worst-case scenarios that I could imagine has been jammed into it? I don't know that anyone would want to take a class like that. I know from recent experience that a lot of people have no interest in checking return values. To them, it's "messy".

To me, it's "this is literally the job that we do". The job is not about the happy path. It's about the unhappy path where something goes wrong, because, well, something always goes wrong somewhere.

There's probably a whole post waiting to be written on this general topic. In short, the bar for software quality in terms of what coworkers, managers and even customers will accept is incredibly low. People put up with all kinds of shit even after paying hundreds or thousands of dollars for something.

"It does that", they say, while I grind off another layer of enamel from my teeth. So frustrating.

Okay, I got off-track there. Such is the nature of responding to feedback, particularly when it's out here in a public post and I have no particular agenda or topic lined up ahead of time.

I'm going to have to ponder this one some more. It seems like there might be some point to having an IRC-type chat environment where a few of us could get on there and I'd cover topics and explain things on the fly, and maybe paste in links to code snippets or something like that.

Think about it and let me know if any of this seems interesting to you.

Coming from another feedback post, there was a comment about "octal dates": where January through July work fine, but then August is 08 and OH boy. One reader responded to that:

This actually ended up biting me specifying IP addresses at one point. The curses interface presented the IP address field to fill out as "000.000.000.000", and I wanted to (essentially) put in "10.110.220.102". Not blanking the initial zero, I ended up with an IP address in 8/8 instead of 10/8. That took a fair bit of head-scratching to figure out. (And, if anyone cares about the machine / OS in question, it was a DIAB DS/90-31, running D-NIX 5.something, and it was the kernel configurator, because that is TOTALLY the place to specify the hostname and IP address, right?)

Ouch! This has shades of the whole flexibility in addressing that certain systems have by way of their inet_aton() implementations.
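For anyone who hasn't hit this: classic BSD-style inet_aton() applies C numeric-literal rules to each piece, so a "0x" prefix means hex and a bare leading zero means octal. The exact behavior varies by platform, so here's a simplified Python sketch of just that base-detection rule (not the real implementation, which also accepts 1-, 2- and 3-part addresses):

```python
def classic_inet_aton(addr: str) -> str:
    """Parse a dotted quad with classic inet_aton base rules and
    return the normalized decimal dotted quad."""
    def parse_octet(s: str) -> int:
        if s.startswith(("0x", "0X")):
            return int(s, 16)          # hex: 0x0a -> 10
        if len(s) > 1 and s.startswith("0"):
            return int(s, 8)           # octal: 010 -> 8  (the trap!)
        return int(s, 10)              # plain decimal

    octets = [parse_octet(p) for p in addr.split(".")]
    if len(octets) != 4 or any(not 0 <= o <= 255 for o in octets):
        raise ValueError(f"bad address: {addr!r}")
    return ".".join(str(o) for o in octets)

# The zero-padded form from the curses field silently lands in 8/8:
print(classic_inet_aton("010.110.220.102"))  # 8.110.220.102
print(classic_inet_aton("10.110.220.102"))   # 10.110.220.102
```

So a form that pre-fills "000.000.000.000" and a parser with octal semantics is a trap waiting for exactly the mistake described above.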

I remember having to do miserable things to get the old "MacTCP" control panel thing going on Macs circa 1994. For some reason, I couldn't just type in a normal a.b.c.d type IP address, and had to feed it a "network number" that was effectively "a" bitshifted left 24 bits + "b" bitshifted left 16 bits + "c" bitshifted left 8 bits. Awful. Just awful.
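The shift-and-add conversion described there looks something like this sketch (presumably with the last octet added in unshifted; the example address is made up):

```python
def dotted_to_number(a: int, b: int, c: int, d: int) -> int:
    # Pack a.b.c.d into a single 32-bit "network number" by shifting
    # each octet into its byte position and OR-ing them together.
    return (a << 24) | (b << 16) | (c << 8) | d

n = dotted_to_number(192, 168, 1, 10)
print(n)       # 3232235786
print(hex(n))  # 0xc0a8010a -- each byte is one octet
```

Easy enough with a computer handy, but a miserable thing to make a human do by hand just to fill in a control panel field.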

That's it for now. Thanks for writing in!

