thinlens

Notes on life and tech by Abraham Neben

As a math and science nerd, I wasn't the biggest fan of high school English. But when we were assigned to write Shakespearean sonnets, I had so much fun writing one that it has stuck in my memory after all these years.

A ha! I thought as inspiration struck,
'Twas after days spent searching for a line,
And constant feelings I was stuck,
I finally found a word that sounded fine!

Alas each time a word I wrote seemed clear,
I found it didn't fit the sonnet's rhyme,
Or else it simply made the poem sound queer,
Assigning sonnets was a dreadful crime.

I sat there at my Intel Mac machine,
And tried to write this sonnet for an 'A',
I thought and thought and stared right at the screen,
Perhaps I'll just give Boly a bouquet,

There is but one thing now of which I'm sure,
Assignments of this kind I must endure.

Posted by Abraham

This is a personal reference for some of the new VS Code shortcuts I'm learning to further reduce hand movement while editing.

Macros

I installed the Keyboard Macro Beta extension to enable Vim-like macros. This allows you to record, then play back (typically single-line) edits.

  • Cmd F9 => Start/finish macro recording
  • Cmd F10 => Play back macro
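
For reference, here is roughly what those bindings look like in keybindings.json. This is a sketch only: the kb-macro command IDs and the kb-macro.recording when-clause context are my reading of the extension's docs, so verify them in the Keyboard Shortcuts UI if they differ.

// keybindings.json (VS Code accepts // comments here)
[
    // Cmd F9 toggles recording: start when idle, finish when recording
    { "key": "cmd+f9", "command": "kb-macro.startRecording", "when": "!kb-macro.recording" },
    { "key": "cmd+f9", "command": "kb-macro.finishRecording", "when": "kb-macro.recording" },

    // Cmd F10 plays back the recorded sequence
    { "key": "cmd+f10", "command": "kb-macro.playback", "when": "!kb-macro.recording" }
]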

Cursor movement

  • Mod-U-Left/Right => Jump to the previous/next word
  • Mod-I-Left/Right => Jump to the previous/next word part, typically the next section of the word separated by _ or a case change (the VS Code commands behind these motions are sketched after the next list)

Multi-line edits

  • Mod-P-Up/Down => Add cursor above/below.
  • Mod-P-PageUp/PageDown => Add cursors up to the top/bottom of the file. (A multi-line cursor can be used to play back a macro on multiple lines.)
  • Cmd-L => Convert find highlights to selections.
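
The Mod-layer combos above just send ordinary key chords that VS Code maps onto built-in commands, so they can be rebound in keybindings.json. A minimal sketch; the key chords below are placeholders for whatever your keyboard layer emits, while the command IDs are VS Code built-ins:

[
    // word-part navigation: camelCase and snake_case sections
    { "key": "ctrl+alt+left",  "command": "cursorWordPartLeft",  "when": "textInputFocus" },
    { "key": "ctrl+alt+right", "command": "cursorWordPartRight", "when": "textInputFocus" },

    // add a cursor on the line above/below for multi-line edits
    { "key": "ctrl+alt+up",   "command": "editor.action.insertCursorAbove", "when": "editorTextFocus" },
    { "key": "ctrl+alt+down", "command": "editor.action.insertCursorBelow", "when": "editorTextFocus" }
]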

Posted by Abraham

I've had recurring tendinitis in my right hand for several years, and it has lingered even after several rounds of physical therapy. On reflection, this is because, until recently, I never addressed the specific motions triggering the irritation. Last month I decided to pay closer attention to my body for a few days, and I concluded the irritating motions are:

(1) Hitting enter with my right pinky
(2) Moving my right hand between the typing position and the arrow keys
(3) Moving my right hand between the typing position and my external mouse

I addressed (1) by remapping Caps Lock to Enter, allowing me to hit enter with my left pinky. This is much more ergonomic given that the left pinky is adjacent to Caps Lock, whereas the right pinky needs to cross the " key. I used Karabiner Elements to make this change on both the internal laptop keyboard and my external keyboard.
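
In Karabiner Elements this is a one-entry simple modification. A sketch of the relevant fragment of karabiner.json (the surrounding profile structure is omitted; caps_lock and return_or_enter are Karabiner's key code names):

"simple_modifications": [
    {
        "from": { "key_code": "caps_lock" },
        "to": [ { "key_code": "return_or_enter" } ]
    }
]

Applying it at the profile level covers both keyboards; Karabiner also lets you scope a modification to a specific device.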

Another low-cost improvement was to train myself to hit the space bar with my left thumb instead of my right thumb. The idea was to further reduce overall muscle activation in my right hand.

Motions (2) and (3) were a bit more difficult to address. Indeed, programming often requires repeated brief spurts of mousing, typing, and arrowing. To reduce these motions, I have been exploring new ways to work using just the keyboard. So I set up more mappings in Karabiner Elements to use Control+I/J/K/L as Up/Left/Down/Right. These key combinations pair perfectly with option and/or shift for word jumping or selecting.
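
Each of these mappings is a Karabiner complex modification. A sketch of the left-arrow rule (the other three are analogous); listing the remaining modifiers as optional is what lets option and shift pass through for word jumps and selection:

{
    "description": "Control+J to Left Arrow",
    "manipulators": [
        {
            "type": "basic",
            "from": {
                "key_code": "j",
                "modifiers": { "mandatory": ["control"], "optional": ["any"] }
            },
            "to": [ { "key_code": "left_arrow" } ]
        }
    ]
}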

To reduce mouse use, I've been taking notes on useful keyboard shortcuts in the apps I use, and customizing where necessary. VS Code has many built-in shortcuts (and all are customizable). Jupyter notebooks have many, though I really only need to know how to move between edit mode and command mode, and how to add/delete/run cells. Magnet lets you customize the keyboard shortcuts that move and scale windows to different grid cells.

Chrome was my last mouse-heavy application until I discovered Vimium C, which lets you navigate with the keyboard. Hit f and it shows a two-letter code next to (almost) every clickable element on the page. Type the code of the desired link and it will click it. Many websites these days use JavaScript clickable elements; Vimium C recognizes some of them but misses others. But overall it probably reduces my mouse usage by 75% when browsing the web. (Interestingly, this extension seems to be a fork of Vimium, which doesn't recognize JavaScript clickable elements at all.)

These strategies have really reduced irritation in my right wrist, and improved my programming efficiency. But I recently upped my game with the Ultimate Hacking Keyboard, which I'll discuss later.

Posted by Abraham

Preliminaries

The network topology is as follows:

Fiber optic modem <=> TP Link Router <=> TP Link PoE switch <=> 2 Ubiquiti wireless APs

Hard-wire the Pi to the switch, and assign it a static IP address from the router.

Set up the UniFi Controller in a Docker container on the Raspberry Pi

  • Note, this configuration should be done on a Mac that is hard-wired to the switch
  • SSH to the Pi and install Docker (following this)
  • Install the UniFi Docker image as below

# Based on instructions at https://hub.docker.com/r/jacobalberty/unifi

# set up directories for the controller's data and logs
mkdir -p unifi/data
mkdir -p unifi/log

# pull the unifi container image
sudo docker pull jacobalberty/unifi

# run the unifi container
# ports: 8080 = device inform, 8443 = web GUI, 3478/udp = STUN
sudo docker run -d --init \
    --restart=unless-stopped \
    -p 8080:8080 -p 8443:8443 -p 3478:3478/udp \
    -v ~/unifi:/unifi \
    --user unifi \
    --name unifi \
    jacobalberty/unifi

# confirm that unifi is running
sudo docker ps
  • Go to https://ADDRESS:8443/, where ADDRESS is the IP of the Pi assigned earlier (it should match the IP shown with hostname -I)
  • In the UniFi web GUI, either set up a new network or restore from a backup
  • To get the Ubiquiti APs to show up in the UniFi GUI:
    • Reset each Ubiquiti AP by pressing the reset button on the back for >10 seconds (until lights flash), then power cycle
    • In the web GUI, check “Override” next to “Inform Host” in Settings => System => Advanced, and enter the IP of the raspberry pi
    • SSH into each Ubiquiti AP (username: ubnt, password: ubnt) and run set-inform http://ADDRESS:8080/inform (see the shell sketch after this list)
      • Find the IPs of the APs by noting the MAC address on the back of each AP, and matching to the MAC addresses in the DHCP client list in the router’s online GUI at 192.168.0.1
    • Then in the UniFi web GUI, the APs should appear. Click “Adopt”
    • Note after adoption, the ubnt/ubnt credentials no longer work. Instead use the username/password in the Ubiquiti administration interface under Settings → System → Advanced → Device Authentication
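
Condensed, the per-AP adoption step looks roughly like this from a shell, where AP_IP is the address found in the router's DHCP client list and ADDRESS is the Pi's static IP as above:

# factory-reset APs accept the default ubnt/ubnt credentials
ssh ubnt@AP_IP

# then, at the AP's prompt, point it at the controller on the Pi
set-inform http://ADDRESS:8080/inform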

Wifi performance

  • Set the 2.4 GHz and 5 GHz channel widths to 40 MHz and 80 MHz, respectively
  • See summary of UniFi advanced settings here

Misc Notes

Note: if running the UniFi Docker container on a Mac, go to https://host.docker.internal:8443/ in the browser (make sure "127.0.0.1   host.docker.internal" is in /etc/hosts)
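
For example, to add that mapping if it isn't already there:

# map host.docker.internal to the loopback address
echo "127.0.0.1   host.docker.internal" | sudo tee -a /etc/hosts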

Posted by Abraham

(I initially posted this on the UHK forum, but wanted to post it here as well for posterity)

After spending a couple weeks optimizing the layers and modifier keys of my Ultimate Hacking Keyboard (which I love, btw), I needed to print some new keycaps to reflect the new layout.

Generating the key cap STLs

I generated STL models of the needed keys using the open source KeyV2 model for OpenSCAD. I made the following tweaks to the model:

  • Set “key profile” to OEM.
  • Set “key length” and “row” (in the code panel) as needed, per UHK's spec.
  • Set “stem type” to rounded_cherry with a “stem slop” of 0.2; this gave a snug fit for my brown tactile switches.
  • Set “inverted dish” to true, because I prefer all the modifier keys to have a convex top.
  • Set “wall thickness” to 2.
  • Do not add any label text in the “legend” field; we will add labels later in the 3D slicer.
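
Putting those settings together, a minimal OpenSCAD sketch for one keycap. The module and variable names here ($stem_slop, $inverted_dish, $wall_thickness, oem_row, rounded_cherry, u) are how I understand KeyV2's source maps onto the customizer labels, so double-check them against the repo:

// minimal KeyV2 sketch for a single convex OEM-profile cap
include <./includes.scad>

$stem_slop = 0.2;        // snug fit on brown tactile switches
$inverted_dish = true;   // convex top for modifier keys
$wall_thickness = 2;

// OEM profile, row per UHK's spec (row 3 here), rounded Cherry stem;
// wrap with u(1.5) etc. for wider caps
oem_row(3) rounded_cherry() key();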

Adding labels in the slicer

I am somewhat new to 3D printing, so this probably isn't the most efficient workflow, but it did give nice results. I have a Bambu X1 3D printer, so I use the Bambu slicer. Load the STL for a key into the program, then use the Auto Orient button to align the front face of the keycap with the print plane. Then manually rotate the model by 180 degrees so the front face of the key faces up.

Read more...

Reposted from my old blog.

Fall colors shine on the dreariest days. Indeed nature photographers love cloudy days for the same reason portrait photographers use light diffusers. Clouds produce a warmer, softer light than direct sunlight, making reds, greens, and yellows pop, and lighting up nooks and crannies everywhere in the image. I took this photo last weekend in East Rock Park in New Haven after a half hour walk in the rain along Mill River, wishing I'd worn gloves. I really liked this view of one of the ridges with the new fall colors blooming up from the bottom of the frame.

#photo

Posted by Abraham

Reposted from my old blog.

From undergrad through my PhD, I learned physics in traditional lecture courses, and I very much enjoyed it. Over the years, whenever I’ve heard tell of the magic of active learning, it’s been like nails on a chalkboard to me. I originally set out to write a whole-hearted defense of the traditional lecture while violently skewering active learning; however, in reading some of the active learning literature, I came to see that its proponents make some good points, often with a great deal of data on their side. Based on my experience as a TA, I still believe that active learning approaches like MIT’s TEAL introduce more problems than they solve, but I’m convinced it’s worth at least engaging with the issue. This first post is about how active learning was introduced to physics. The second will be about how it works in practice.

Intro physics is hard, even at the statiest of state schools. Why are these first physics courses so challenging? In many subjects, students might begin a course knowing very little about the material, but in physics, Halloun and Hestenes (HH) argue that beginners know a great deal about the topic; the problem is everything they “know” is wrong. In a pair of extremely well cited papers ([1] and [2]), HH argue that students generally begin intro physics courses with strongly held, essentially medieval beliefs about kinematics and dynamics, and traditional lecture courses are unable to disabuse them of these misconceptions. The implication is that students learn to perform calculations about falling projectiles and inclined planes using F = ma, x = ½at², and E = ½mv², but they are unable to integrate these concepts with their original common sense, leading to poor conceptual understanding when asked questions which require more than grasping for formulas.

These findings ignited efforts at Harvard, MIT, ASU, and many other universities to reimagine Physics 101 to promote active instead of passive learning. In 1990, after years of polished lecturing for Harvard’s Physics 11, Eric Mazur asked his students a conceptual question off the cuff, and received only blank stares [3]. “How should I answer these questions according to what you taught me?” a student asked. Discouraged, he asked students to discuss, and within just a few minutes, students agreed on the correct answer. Mazur has become one of the leaders of the active learning movement in physics, and MIT has formalized these techniques into a course known as TEAL: Technology-Enabled Active Learning.

Compared to a lecture course, TEAL seems downright bizarre. Students sit in small groups around circular tables in a specially constructed room with white boards covering the walls. Instructors lecture from PowerPoint slides interspersed with conceptual questions, demos, and small group problem-solving sessions. Trials showed both high- and low-achievers taking the TEAL course learned more than their peers taking traditional lecture courses [4]. Based on these results, TEAL physics has become mandatory for all MIT freshmen (except those placing into the advanced track). Clearly MIT should be lauded for its pursuit of better teaching and learning.

But goals are one thing, reality is very much another. In my next post, I’ll discuss the good, the bad, and the ugly of active learning put into practice, based on my experience as a TA.

[1] Halloun, I. A. and Hestenes, D. (1985). The initial knowledge state of college physics students. American Journal of Physics, 53, 1043.
[2] Halloun, I. A. and Hestenes, D. (1985). Common sense concepts about motion. American Journal of Physics, 53, 1056.
[3] Lambert, C. (2012). Twilight of the lecture. Harvard Magazine, March-April 2012.
[4] Dori, Y. J. and Belcher, J. (2005). How does technology-enabled active learning affect undergraduate students’ understanding of electromagnetism concepts? The Journal of the Learning Sciences, 14(2), 243–279.

Posted by Abraham

I've used a Dell U2413 monitor for years with my MacBook Pro, and always connected to the monitor's DisplayPort input (using this cable). This is a low-DPI monitor (non-retina), but with font-smoothing in macOS I always found the clarity to be more than adequate. However, when I ordered an Anker USB hub with a separate HDMI port, I decided to connect to the monitor using an HDMI cable (this one) in order to free up another USB-C port. This worked, but it seemed to disable macOS's font smoothing, making the text appear really jagged.

Here are some close-up photos of the monitor that show the difference.

Jagged, hard-to-read text on the Dell U2413 display plugged in via HDMI.

Smooth, easy-to-read text on the Dell U2413 display plugged in via DisplayPort.

This has been true in both Catalina and Big Sur; I don't think I tested it on earlier OS versions. I tried forcing macOS to re-enable font smoothing following this, but no change. So in lieu of buying a new monitor, I've reverted to my DisplayPort to USB-C cable.
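
For reference, the overrides I tried are the commonly cited font-smoothing defaults keys, roughly as below (log out and back in for them to take effect):

# re-enable font smoothing system-wide (Catalina and later)
defaults write -g CGFontRenderingFontSmoothingDisabled -bool NO

# set smoothing strength: 0 = off, 1-3 = light to strong
defaults write -g AppleFontSmoothing -int 2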

#tech

Posted by Abraham

I'm hesitant to upgrade to an M1 Mac because I hear it runs much cooler (e.g. see Gruber's review), and I'm worried that Diddy won't nap on my computer anymore. He absolutely loves the toasty i9 CPU in my 16-inch MacBook Pro! If you are a cat, do you like the new Apple silicon Macs?

#tech

Posted by Abraham

Reposted from my old blog.

“I don’t use HDR, I photograph what I see,” a photographer explained to me in a high-end gallery. So goes the refrain of photographers who don’t understand the purpose, or the power, of High Dynamic Range (HDR) techniques. They are wrong. By not using HDR, they photograph what the camera sees, not what their eyes see. Divorced from our creative vision, what the camera sees is meaningless and often far different from what we perceive. Photography is an artistic enterprise, and there are always many technically correct photos of any given scene (not to mention those artsy in their technical incorrectness).

Above is a photograph of a yellow sun on a bright blue sky, at least that’s what I saw with my eyes. Many images of this scene were possible, but I chose to saturate the sun at the bright end and the sky at the black end to show that the scene has far more dynamic range (range of brightness) than the camera sensor can capture. You might argue that this is a technically incorrect image because all pixels are saturated; on the other hand, an image which captures the blue sky would saturate the sun even worse!

Still it is hard to imagine that pure mimicry of human perception constitutes anything resembling art. Humans see a narrow band of the electromagnetic spectrum, from 400nm (blue) to 700nm (red), for the very good reason that this is the range where our sun's output peaks. On the other hand, the exact way we perceive brightness and color is the result of the evolutionary vagaries of our visual system; there is no artistic choice being made by our brains when we look at a sunset. We perceive the particular set of colors that we do because we have different types of retinal cells (cones) sensitive to different sub-ranges of this band of wavelengths, and we see brightness with a different type of cell (rods) sensitive to total intensity with much higher dynamic range than our cameras.

Our eyes effectively burn and dodge over dark and bright areas; the darker regions of the image formed on the retina rely more on rod response as these cells are more sensitive, and the pupil will automatically dilate to mitigate any excessively bright regions. Further, psychological image perception mechanisms help us make sense of what our eyes see by filling in the scene between our narrow regions of focus.

Indeed an art beholden to exactly what we can see seems unnecessarily limiting. Astronomers know this well. The universe is filled with all sorts of radiation, from radio waves to visible light to x-rays and gamma rays, and even cosmic rays, neutrinos, and gravitational waves (to name just a few). All must be synthesized to develop a complete picture of the cosmos. Sure, an image of the world at radio frequencies would look bizarre in comparison to how we perceive it with our eyes, but is it less real? Infrared photography comes to mind as a less extreme example.

Returning to HDR, I do empathize with the critiques, but I think the only justifiable one is that HDR has simply become a throwaway image enhancement devoid of any artistic meaning. It’s not a less real expression of the scene, simply one that often substitutes for anything deeper.

#photo

Posted by Abraham