iPad Review

For the past year and change I used a 32GB non-pro 7th/8th generation iPad for almost all of my personal computing needs. The setup worked really well and I’ve come to really like the device. This somewhat surprised me since I love fast computers and big monitors. So here’s a review of my experiences.

Before the iPad I used a Lenovo X1 Tablet 3rd Gen, which was a wonderful device too: very fast (I had the fastest CPU option, the i7-8650U), dual Thunderbolt ports and an outstanding 3:2 aspect-ratio high-res screen. Unfortunately it mysteriously stopped working in the summer of 2020, right after the warranty expired.

New M1 MacBooks were expected that fall, and since I’ve been drifting into the Apple ecosystem over the past decade I decided to wait to buy a replacement for the ThinkPad and use the iPad to tide me over for a couple of months (it was a work gift we had lying around). The first M1 MacBooks came out in November ‘20, but I was somehow underwhelmed—I don’t remember exactly why—maybe something about the limited external monitor support. So I decided to hold out for the follow-up M1-CPU MacBook Pros and the iPad soldiered on. The new MacBook Pros turned out to be super expensive, so this winter I ordered what I should have gotten when they came out: a standard M1 MacBook Air.

Ultimately I think I’ll look back on this iPad-period as a Walden-Pond-kinda time where I successfully limited myself to a really constraining computing device, learned some things and then returned to using more powerful and ultimately more productive computers.

Hardware

First off: the base iPad model is a great device. I almost always use it with an Apple Smart Keyboard attached. The Smart Keyboard compares poorly with a ThinkPad keyboard but it’s surprisingly useful and I can type pretty quickly in relative comfort. The big drawbacks are the lack of backlighting and the need for a stable typing surface (i.e. no good for reclining on the couch). If I were going to continue using the iPad as my main device I’d try out the Logitech Combo Touch.

The interaction between multiple keyboard layouts (Danish and US in my case) and keyboard shortcuts is not really thought through. For example, the Safari tab-cycling shortcuts are cmd+shift+[ and cmd+shift+], which is fine on the US layout. It doesn’t make sense for the Danish one though, and there’s no keyboard shortcut for changing the layout. The workaround if I’m stuck on the Danish layout and want to switch Safari tabs requires clicking the Safari address bar (bringing up the on-screen keyboard controls), selecting the US keyboard layout and then cycling through tabs. Not great.

I got a 1st generation Apple Pencil for good measure, just to see how much utility I could get with a full complement of input devices. Besides a few sketches for home DIY projects, I use the pencil to draw with my 2-year-old son, and it’s great for that. No paper, no mess, easy storing and cataloging of drawings. We draw together in the Notes app, but I’d like to learn Procreate. The pencil is also a nice alternative to finger input because it keeps smudges off the screen. I use a pencil-holder attached to the keyboard cover to hold the non-magnetic 1st gen pencil, which works well.

iPad with Smart Keyboard and Pencil

The hardware design for the standard iPad feels dated. It’s not much changed from the very first iPad: Home button, big bezels, curved back. The 10.2” screen is small, but fine for single-window use in landscape mode. The USB 2 interface on the Lightning port also feels incredibly dated and it makes moving files using flash drives or local 1Gbps networking a drag.

The two worst aspects of the 7th/8th generation iPads were both fixed in the new 9th generation model (which I haven’t tried): the poor front-facing FaceTime camera and the cramped 32GB of base storage. I never take FaceTime calls on the iPad (with my parents, for example, so they can talk to their grandson) even though it has a bigger screen than my iPhone — the camera is just too crummy. For comparison, I gifted my parents an iPad Pro, primarily to get them the Center Stage FaceTime feature, and it’s great.

The 32GB storage situation is manageable, if annoying. The problem is not that I have 32GB worth of apps, it’s that Photos hoovers up any residual storage with cached images from my iCloud photo library. I wish there were a way to make Photos offload images more aggressively to free up device storage, or to set a limit on the amount of device storage used by the Photos app.

Battery life is pretty OK and certainly better than on Windows laptops. On weekends (when I tend to use the iPad a lot) I have to stay on top of charging at night or it’ll run low. Coming from newer iPhones, one thing to note is the lack of fast charging: trying to quickly top up the battery with a 30-minute charge doesn’t make a difference; the iPad more or less has to soak up a charge overnight.

iPadOS 15

I got onto the iPadOS 15 public betas as soon as they became available and I suspect that contributed to my positive experience (I never really used an iPad in earnest before, so can’t compare with earlier iPadOS versions). While the iPad screen is too small for actual multi-window use, the improved keyboard shortcuts and widget support probably helped make the iPad a more tenable laptop replacement. 

Speaking of keyboard shortcuts, one thing that still bugs me is the semantics of cmd+tab on iPadOS. Coming from full desktop operating systems I have deep muscle memory and always use alt+tab/win+tab/cmd+tab to switch between currently open apps. On iPadOS, however, cmd+tab doesn’t cycle through “currently open apps”; instead it cycles through apps that the OS hasn’t swapped out for one reason or another. This seems arbitrary and, dare I say, un-Apple-like. A more charitable interpretation is that the cmd+tab app list is “apps that you can switch to really fast” (i.e. without the app having to swap back in), but even that is not really satisfactory: I don’t alt-tab around to find an app I can get going really quickly, I alt-tab to get to some app that I was using a minute ago and now need to use again, whether I have to wait for it to swap in or not.

The upshot is that I tend to use Spotlight search when switching apps, which is not lightning-fast, but fine. The 🌐+⬆️ app switcher is also “fine” but not great for keyboard-only use, partly because the default focus is on the upper-left corner app, which corresponds to what I was doing 6 apps back. I’m trying to make myself use 🌐+⬅️/➡️, which is actually fast (until you get to an app that has to swap in). It doesn’t give you an immediate overview of the list of apps you’re tab’ing through though; instead you’re just stumbling along, hoping that the next keypress produces the app you want.

I guess the ultimate cause of this difference is the auto-app-closing behavior in iPadOS. The equivalent in “normal” operating systems is memory being paged to disk, which seems to result in a more seamless user experience. Then again, maybe I just haven’t used a low-memory Windows machine in a long time, and the iPadOS behavior of almost-fully closing apps that no longer fit in memory turns out to be the optimal one (also for battery life).

The base-model iPads have 3GB of memory, which is probably the right tradeoff to produce a ~$300 tablet, but watching Marques Brownlee skim through open apps on a 16GB iPad Pro, it sure looks like more memory helps since apps almost never have to be closed by iPadOS.

Apps

iPadOS has had full Safari since version 13, and using an app versus a web app is often a wash in my experience, especially because some apps don’t support affordances you’re used to on the web, like selecting and copying text. I still use plenty of apps though, so I wanted to call out some of the highlights and lowlights.

I’m not the first one to point this out, but the Google productivity app suite (Docs, Sheets) is not good on iPadOS and you’re almost always better off using those services with Safari (or Chrome maybe, I didn’t try). I can understand why iPadOS is not a priority for Google, but it’s really a shame since those are otherwise such great services.

I can also understand why Amazon doesn’t prioritize the Apple ecosystem, but Amazon Fresh missing from the Amazon iPadOS app is really frustrating, especially since it’s in the iOS app.

The built-in Apple Files app is terrible. Trying to copy large files around or transfer them over the network will reliably cause the whole iPad to grind to a halt; it’s the only action I’ve found that does that. It’s like Apple ended up in some strange middle ground between the paradigms of strong on-device processing and cloud-first data storage, and the result is frustration. The file transfer problem is particularly frustrating because it’s been adequately solved on literally every other computing platform I’ve used in the past decade. I tried a handful of 3rd party file apps but they either sucked in different ways or came with ads and other cruft.

The Microsoft Remote Desktop app is great. Part of the reason I could make do with just an iPad for so long was definitely that I had a full Windows VM on my home server that I could access to do the few things that just didn’t work on the iPad. The Remote Desktop app mostly bridges the iPad/Windows divide with sane defaults for what keys and keyboard shortcuts do and clearly a lot of thought and care has gone into making that seamless. The contrast with remote desktop’ing into a Mac is stark. Apple provides no built-in way to do this (from an iPad) and the assorted VNC-based apps and tools in the Apple ecosystem compare very poorly with Microsoft Remote Desktop.

The Photos app is good but as mentioned above its tendency to gobble up storage is a problem. There’s a very strange incompatibility where videos shot on my iPhone 12 Pro Max in 4K 30fps cannot be edited in the Photos app on the iPad. iMovie on the same iPad can edit those videos just fine, so it’s not a hardware or codec limitation. I prefer cropping videos on the iPad over my phone, but having to install 750MB of iMovie (which is also more cumbersome for simple things than editing right in Photos) ruins that.

Editing videos shot on iPhone doesn’t work in Photos app on iPad

Summary

So would I recommend a base-model iPad as your main computing device? No, not if you like “normal” computers and can afford a decent Mac or Windows laptop. And I’m not sure I’d feel differently if I had tried this experiment with an iPad Pro; the OS is just too quirky and limited. But for less than $300 these iPads are incredible devices and I heartily recommend them to anyone who can live within the constraints, or for whom the constraints (and the improved reliability and stability you get as a result) are actually a benefit.

Ecobee3 Lite, two wires and fan-only

This post covers how I upgraded our home thermostat from a battery-powered two-wire setup to an Ecobee3 Lite supporting both heating and fan-only modes. I wanted the fan-only mode to circulate air in our two-level condo where hot days often result in a hot and stale 2nd floor and a frigid 1st floor.

Note that I’m neither an electrician nor an HVAC pro and it’s very possible that what I did is a very bad idea. But it worked for me, so I thought I’d share.

Our house only has two wires running from the thermostat mount in the condo to the furnace in the garage, just enough to complete an electrical on/off circuit used to tell the furnace whether to heat or not. This is the dreaded “no C-wire” situation with no way to power a smart thermostat and no way for the thermostat to tell the furnace to just run the air circulation fan. Our furnace is relatively modern and has more wire terminals, but running additional wires from the condo down to the garage was not really an option.

Two wires 🙁

To overcome this I bought two items:

  • A 24V transformer that’s plugged into an outlet near the thermostat mount inside the condo. This powers the Ecobee
  • A Fast-Stat Model 1000. This gizmo consists of sender (inside) and receiver (furnace) components. It works by multiplexing additional control signals (for fan-only, in my case) over the single installed wire-pair. Higher-model-number Fast-Stats can provide more virtualized wires, but I just needed one

The first step was to install the Fast-Stat. It comes with easy-to-follow instructions and wiring it into our furnace’s clearly labeled bread-board-like circuit board was relatively simple.

Furnace-side Fast-Stat install

With the Fast-Stat installed I could run both heating and fan-only modes using the old dumb thermostat, validating that it was working correctly.

Next, I mounted the new Ecobee and wired it up with the wires from the Fast-Stat and from the 24V transformer. The first time I did this, I got it wrong. I wired the transformer wires to C (“Common”) and R(c) (for “Red-Cooling”, I believe) and put the black wire in R(h) (for “Red-Heating”). I guess I thought that the Ecobee wanted it that way because it’s going to be running the heating system (hence R(h)) and the transformer instructions said to connect to C and R(c) wires.

With that ready the Ecobee turned on fine and all the wires showed up in the Ecobee configuration interface. Heating even worked! I couldn’t make the Ecobee run the fan-only mode, however, and at this point I actually gave up on fan-only for a couple of months, happy that I could at least control heating using the fancy new smart thermostat.

This weekend I had a chance to fiddle with the thermostat some more, and managed to get everything working. First I tried just connecting the G (“Green”) terminal (which runs just the fan) to R(h) with a piece of wire, and the fan duly started whooshing air around. This was not surprising since the old dumb thermostat could do that too, but at least it showed that the wiring and connections on the Ecobee mount were OK.

Then I tried simply reversing the inputs to the R(c) and R(h) terminals so the transformer wire went to R(h) and the furnace control wire to R(c). In that configuration the Ecobee wasn’t getting any power and wouldn’t turn on. The breakthrough was to simply jam both the transformer and the furnace control wire into the R(c) terminal of the Ecobee mount. Re-reading the Ecobee instructions that makes some sense because the Ecobee wants to always use the R(c) terminal for systems with only one R-wire.

Working setup

In spite of much googling I never found complete instructions for combining a 24V transformer and a Fast-Stat to make an Ecobee work for both heating and fan-only with a two-wire system. I hope this post helps others with the same setup.

Building Podnanza: an ASP.NET Core API on AWS Lambda

Podnanza is a simple screen-scraper/feed-generator that I built for my own amusement to turn shows from Danish Radio’s (DR) Bonanza archive into podcasts. Check out the Podnanza announcement post for details. This post describes how Podnanza was built using ASP.NET Core running on AWS Lambda. The Podnanza source code is on GitHub.

I’ll start by admitting that AWS Lambda is the wrong technical architecture for Podnanza. Nothing’s wrong with Lambda, but Podnanza is a set of very static RSS feeds: The shows are from DR’s archive and don’t change or get new episodes. A simpler Podnanza implementation would have been a static-site generator that scraped the archive and put the XML RSS feed files in AWS S3.
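
Just to illustrate what that simpler approach could have looked like, here’s a rough C# sketch (none of this is Podnanza’s actual code; the bucket name, key layout and scraping helper are all made up) that renders a feed and drops the XML in S3 using the AWS SDK:

using System.Threading.Tasks;
using Amazon.S3;
using Amazon.S3.Model;

public class StaticFeedPublisher
{
    // Hypothetical helper: scrape a Bonanza series page and render it as RSS XML.
    private Task<string> ScrapeAndRenderFeedAsync(int seriesId) =>
        Task.FromResult("<rss>...</rss>");

    public async Task PublishAsync(int seriesId)
    {
        var rssXml = await ScrapeAndRenderFeedAsync(seriesId);

        using (var s3 = new AmazonS3Client())
        {
            // The generated feed would then be served straight from the bucket,
            // with no Lambda, API Gateway or other moving parts involved.
            await s3.PutObjectAsync(new PutObjectRequest
            {
                BucketName = "podnanza-feeds",
                Key = $"p/{seriesId}",
                ContentBody = rssXml,
                ContentType = "application/rss+xml"
            });
        }
    }
}

Run on a schedule (or just once, given that the archive doesn’t change), that would have been the whole system.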

I opted for Lambda for the very bad reason that I wanted to learn about serverless/function-based development by implementing a “real” project, and Podnanza was the realest small-size idea on my mind at the time. At least it’ll only be me that has to deal with maintenance of the over-complicated setup.

FaaS and HTTP Apps

Working (as I do) on PaaS/FaaS/Serverless products one might encounter arguments like:

FaaS is event-based programming and HTTP requests can be thought of as events. If a PaaS platform has good autoscaling and scale-to-zero (idling) then separate FaaS-features are not needed—people should just build FaaS apps as normal HTTP services.

Or the other way around:

If we have FaaS and event-based programming, why would we also support long-running processes for serving HTTP requests? People should just build HTTP apps from FaaS features since dealing with HTTP requests is an example of handling events.

In the abstract, both of these statements are correct, but they also obscure a lot of useful nuance. For example, even the slickest HTTP app platform pushes some HTTP handling overhead onto developers. Programs that only have to accept events through an interface defined in an SDK maintained by the FaaS platform can be a lot simpler than programs dealing with HTTP, even when an HTTP endpoint is only required for ingesting events. And because event handling is a more constrained problem than general HTTP, platform-provided tooling such as SDKs and test mocks can be more targeted and effective.
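
To make the “events can be simpler” point concrete, here’s a minimal C# sketch of a pure event handler on AWS Lambda. The OrderPlaced event type and handler names are made up for illustration; only Amazon.Lambda.Core is a real dependency. There’s no routing, header parsing or response formatting to write; the platform deserializes the event and calls the method.

using Amazon.Lambda.Core;

// Hypothetical event type, not part of any AWS SDK.
public class OrderPlaced
{
    public string OrderId { get; set; }
}

public class OrderHandler
{
    // The FaaS platform deserializes the incoming event and invokes this method directly.
    public void Handle(OrderPlaced orderEvent, ILambdaContext context)
    {
        context.Logger.LogLine($"Processing order {orderEvent.OrderId}");
    }
}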

Similarly, forcing all HTTP apps to be built by handling events coming through a FaaS platform event interface is not ideal either:

  • Lots of apps have already been built using HTTP frameworks like Node.js Express, and those apps would have to be rewritten to conform to the event interface
  • Many developers are very experienced and productive building HTTP apps using existing HTTP frameworks and it’s not worth it for them to ditch those frameworks for an event-based HTTP model, even if it comes with slightly reduced management overhead
  • FaaS interfaces are still largely proprietary and platform-specific, causing lock-in (although middleware like the Serverless Framework can help mitigate that). HTTP apps, on the other hand, can run anywhere

ASP.NET Core on AWS Lambda

With all that out of the way, let’s look at how AWS made ASP.NET Core respond to HTTP requests on Lambda. Spoiler alert: It’s a pretty clever blend of the two dogmas outlined above.

Generally serverless “web apps” or APIs are built with Lambda by using an AWS API Gateway (optionally combined with CloudFront for CDN and S3 for static assets) that sends API Gateway Message Events to a Lambda function. The events are basically JSON-formatted HTTP requests, and the HTTP “response” emitted by the function is also JSON formatted. Building a serverless .NET web app on top of that would be pretty frustrating for anyone familiar with ASP.NET because all of the HTTP, MVC, routing and other tooling in ASP.NET would not work.
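
To get a feel for why, here’s a rough sketch of a framework-free handler written directly against the (real) Amazon.Lambda.APIGatewayEvents types. The route and response body are invented and this is not how Podnanza is implemented; the point is that every path, verb and header has to be handled by hand.

using System.Collections.Generic;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

public class RawFeedHandler
{
    // The JSON-formatted "request" arrives as an APIGatewayProxyRequest and the
    // JSON-formatted "response" goes back out as an APIGatewayProxyResponse.
    public APIGatewayProxyResponse Handle(APIGatewayProxyRequest request, ILambdaContext context)
    {
        if (request.HttpMethod == "GET" && request.Path.StartsWith("/p/"))
        {
            return new APIGatewayProxyResponse
            {
                StatusCode = 200,
                Headers = new Dictionary<string, string> { { "Content-Type", "application/rss+xml" } },
                Body = "<rss>...</rss>"
            };
        }

        return new APIGatewayProxyResponse { StatusCode = 404 };
    }
}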

But here’s the genius: Because the ASP.NET Core framework is fairly well-factored, AWS was able to build an HTTP request pipeline frontend (Amazon.Lambda.AspNetCoreServer) that marshals API Gateway Message Events and feeds them into the rest of ASP.NET Core as if they were normal HTTP requests (which, of course, they were before the AWS API Gateway messed them up and turned them into JSON). The AWS blog post has more details and also diagrams (reproduced below) showing the two execution models.

Normal Flow
ASP.NET Core standard HTTP pipeline (source)
Serverless Flow
ASP.NET Core Lambda HTTP Pipeline (source)

The result is that ASP.NET Core web apps can be debugged and tested locally using the “standard” IIS/Kestrel-based pipeline and then built and deployed using the Amazon.Lambda.AspNetCoreServer based pipeline for production deploys to AWS Lambda. AWS even ships Visual Studio plugins and dotnet new templates that make getting started simple.
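
The wiring needed for that is pleasantly small. Below is a sketch of roughly what the AWS template generates (the class names and the Startup class are placeholders; an actual project may differ): one entry point derives from APIGatewayProxyFunction for Lambda, and a second one hosts the same Startup on Kestrel for local runs.

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

// Used when the app runs on Lambda: API Gateway proxy events are marshaled
// into the regular ASP.NET Core request pipeline by the base class.
public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    protected override void Init(IWebHostBuilder builder)
    {
        builder.UseStartup<Startup>();
    }
}

// Used for local development and testing: the same Startup, hosted by Kestrel.
public class LocalEntryPoint
{
    public static void Main(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build()
            .Run();
}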

While neat, the Lambda approach completely ignores the ideal of dev/prod parity, and the execution framework during local testing (with IIS/Kestrel) is very different from the production environment. Somewhat to my surprise I encountered zero problems or abstraction leaks with the exotic HTTP setup when building and evolving Podnanza, but I suspect that more complex apps that make fuller use of HTTP semantics might see oddities.

Summary

As of this writing, Podnanza has been running without a hitch on AWS Lambda for more than 6 months, typically costing around $0.20/month including CloudFront and API Gateway use. I’ve pushed multiple tweaks and improvements without issue during that time, always using the dotnet lambda package command. On a side note, I admire the AWS .NET team’s zeal in building the Lambda deploy flow into the dotnet tool, but I wonder if it would have made more sense to just add it to the aws CLI that developers use to complete other AWS tasks. Also note that I haven’t built any CI/CD or GitHub-based deployment flow since it’s just me working on and deploying Podnanza. Maybe improving that would be a good way to learn about GitHub Actions.

Podnanza: Podcast Feeds for Danish Radio (DR) Bonanza Archive

This post is likely only interesting if you’re a Danish-speaker.

Podnanza is a screen-scraper and feed-generator that turns radio series from the Danish Radio Bonanza archive into podcast feeds for easy listening in your favorite Podcast app. I built Podnanza mostly for my own enjoyment because I wanted to listen to children’s radio-dramas edited and narrated by Carsten Overskov.

Here are some example feeds with links to the series’ pages on Bonanza:

By popular demand:

I’ve submitted one of the feeds to iTunes to see if Apple will list them (they might not). UPDATE: Apple published the podcasts and I’ve updated the links.

If the links are not working for some reason, here’s how to manually add the raw RSS feeds in the iOS Podcasts app:

All of Overskov’s shows were aired as part of DR’s “Children’s Radio” segments but as far as I remember, at least the Ivanhoe edit/re-telling was very graphic and raunchy (much more so than the “adult” original). I didn’t actually listen to the shows as a kid, but heard what must have been a re-airing of Ivanhoe in 1-hour segments on my first minimum-wage job out of high school, assembling door knobs. The shows were reportedly also very popular with long-haul truck drivers.

It’s funny to me that the progressive/left-leaning folks at DR (Carsten Overskov got into trouble for hollering “Advance, comrades—the microphone is with you!” while covering an anti Vietnam War demonstration in front of the American Embassy in Copenhagen in the ’60s) spent all this time retelling and recording Victorian era English novels, but who am I to complain? They’re the same novels my dad read aloud to me when I was a kid—King Solomon’s Mines is the first novel that I remember hearing.

Podnanza dynamically scrapes the Bonanza site to generate the Podcast feed, which is then cached for performance. If you find radio shows on Bonanza that you’d like to consume as a Podcast, simply find the identifier that DR uses for that particular series and stick it at the end of the Podnanza URL. For example, https://www.dr.dk/bonanza/serie/276/ivanhoe/ is the URL for Ivanhoe, so you want http://p.friism.com/p/276 for the Podnanza URL that you add to your podcast app.

Podnanza does support HTTPS, but iTunes’ very dated list of trusted CAs doesn’t include AWS (where Podnanza runs), so I’m just using HTTP URLs for now. The code powering Podnanza is on GitHub in case you find bugs or want to help out.

Overskov died a couple of years ago but his work lives on in Danish Radio’s awesome online Bonanza archive. I hope Podnanza will help make consuming these shows easier, and that you’ll enjoy listening to them as much as I know I will.

MATE e-bike Review

My girlfriend and I got a MATE e-bike last year. While far from perfect, it has proved a solid purchase and we’ve put more than 2500km on it. Read on for a review and details on some of the modifications I’ve made to our MATE.

We ordered a black 350W MATE S fairly early in the Indiegogo campaign in the fall of 2016. Including shipping I think the price came to around $1000. Production and shipping took a long time and the bike arrived in mid 2017.

Mate before additions

Quality

To be clear, the MATE is not a high-quality e-bike. But it’s relatively well-designed and well-appointed, with front and rear disc brakes, front and rear suspension, a good battery and a powerful motor. It’s a good deal for the money in my opinion.

On our bike, the screw that holds the hasp that secures the collapsing handlebar came out within the first couple days of use (I suspect it hadn’t been threaded in properly). This would cause the steering tube to collapse while riding (not a good thing). Zip-tying the hasp in place solved the problem, and I notified MATE hoping they’d improve quality control on that component.

After a year of heavy use, 6 spokes on the rear wheel had snapped. Replacing them was not a big deal, but still annoying.

The pedals that come with the bike are of a collapsing type, but they tend to give a little when pedaling hard—not a confidence inspiring trait. I plan to replace them with some leftover regular pedals that I have lying around.

Weight

The MATE is heavy at 23.7 kg. I had hoped that my girlfriend would want to ride the bike to BART, collapse it, take it on the train and then put it back together to ride the final stretch to her office. This proved unrealistic: The bike is way too heavy to schlep around public transport. I sometimes ride the MATE to work and can only barely haul it up a flight of stairs on the way to the bike locker. No single component tips the scales: The battery is heavy, the rear wheel (with motor) is heavy, and the frame is heavy.

Before riding the MATE I thought the suspension (front telescoping fork, rear articulated sub-frame) was a gimmick. But taking into account the weight, the suspension does really help absorb bumps from holes in the road and going up or down curbs.

Bits and pieces

Other than lights and a lock, one of the first things I put on the handlebar was this phone mount. It has wrap-around elastic bands to secure the phone in place. The MATE comes with a USB port for phone charging under the handlebar. Combined with Bluetooth headphones, I can be in meetings while riding to work (with my phone charging), occasionally glancing down to see what’s going on with slides and presentations in the Hangout. I also installed a cup-holder (per girlfriend request), but that just got in the way and I’ve since removed it.

Cupholder and phone mount

Out of the box the MATE e-assist is speed limited to 30km/h. The governor is easily removed (you can Google for instructions) and then the 350W model will go about 40km/h on a flat, even road (40km/h is probably about as fast as you want to go by the way—my eyes start to water up at that speed, and it’s enough to more-or-less keep up with city traffic).

One funny problem is that even the smallest sprocket on the rear cassette is fairly large, and that means low gearing. I suspect the designers couldn’t put on smaller sprockets because they had to fit the motor and power cable through the hub. The result is that you can’t help the bike accelerate above around 30km/h because it’s not possible to pedal fast enough. To remedy this I switched in a bigger 58-tooth front chainwheel to get higher gearing, which helped a little bit.

We’ve gone through several sets of disc brake pads already. Part of the reason is probably the weight and speed of the bike, but I also find that my riding-pattern is different with a motor. Riding my non-e-bike, I carefully try to conserve momentum, don’t accelerate if there’s a stop coming up and generally try to brake as little as possible. Once I have a motor to help me along, all such caution goes out the window, and I just mash the accelerator whenever there’s a bit of open pavement in front of me. That also means way more stopping and more wear on the brakes.

The brake calipers and pads are some sort of off-brand type, and the MATE website doesn’t have info on getting replacement pads. I thought BB5 pads would work, but they don’t, so I ended up installing a new Avid BB7 caliper for the rear wheel. This also has the advantage of greater pad surface area, which means more stopping power and less frequent pad changes.

Like the brake pads, the tires wear out pretty fast, and I had lots of flats (maybe partly because the factory tires are not of the highest quality). As replacements, I installed heavy-duty German Schwalbe Marathon tires. They’re designed for e-bikes and good for up to 50km/h. I haven’t had a single flat since putting those on. They are heavy, however, and (in my experience) almost impossible to seat, even when applying lots of lubricating soap and elbow grease.

Carriers

I use the MATE to get groceries, so I wanted a good basket. Unfortunately, that can be hard to find for collapsible bikes with 20″ wheels. I ended up getting a cheap aluminium front rack that mounts on the fork and zip-tied an IKEA wicker basket to that. This worked OK, but the rack is flimsy and I already had one crumble on me.

The front of the bike now has a Wald 157 Giant Delivery Basket. This is a great product, but it required extensive modification to mount low over the wheel on the MATE: I had to shorten the wheel-hub poles and since it doesn’t sit up high on the handlebar it’s secured with a stay that connects to the front fork.

Since the basket is not hanging on the handlebar, the weight of the basket and its contents would have rested on the thin quick-release axle skewer. Some of the weight is transferred to the actual fork when the skewer is tightened, of course, but it still seemed like a precarious setup. To help transfer load to the fork I remounted the carcass of the second cheap aluminum front rack and screwed and zip-tied it all together. The result is sturdy and doesn’t sway or wobble at all.

A Durban rack (UPDATE: This rack is no longer available from Amazon—searching for “bike rack 20 inch” or “folding bike rack” seems to surface similar-looking products) is mounted on the rear wheel and two Wald 582 collapsible baskets are suspended on the sides. These are perfectly sized to hold a grocery bag each. One thing you have to look out for when mounting a rack on the rear of the MATE is to either attach it only to the unsprung subframe (most practical option in my experience) or to the sprung frame. If you use mounting points on both, the suspension loads will be transferred to the rack instead of the shock and you’re going to have a bad day.

Loaded with groceries

Note that in my setup, both the front and rear racks are mounted on the unsprung suspension components. This is not ideal because it makes the suspension less effective and causes groceries to bounce around more. I don’t think there’s a good alternative for a compact bike like this, however, and it hasn’t proved a problem since I tend to ride carefully when fully loaded anyway. Also note that, even though the frame is heavy, it’s not as stiff as one could have hoped, and that shows when the bike is carrying a lot of weight.

The good part is that all the luggage ends up being mounted pretty low, which helps with stability.

Baskets and x-large sprocket

Summary

Overall, I think the MATE is a good deal and would recommend it for anyone looking for a simple and cheap e-bike to get around town. With a few additions, it can be turned into a great commuter bike and grocery hauler. Between the three Wald baskets, I can carry 5 full grocery bags and the motor means I don’t even break a sweat trucking them up San Francisco hills to get home.

ThinkPad W520: My (old) new computer

My home computer is a tricked-out 2011 ThinkPad W520. I find it to still be very fast and a joy to use. This post describes upgrades I’ve made to keep the machine relevant in 2018.

I got the W520 as my main work laptop in late 2011. In fact, you can see it in this 2012 AppHarbor housewarming invitation connected to two 23″ portrait-mode monitors. I spec’ed it with a quad-core/8-thread i7-2820QM 2.3/3.4GHz 8MB L3 processor, 8GB RAM, and two 320GB spinning disks configured for RAID 0 (you had to order with a RAID setup to get that option, and 320GB disks were the cheapest way to do that).

After taking delivery I switched out the disk drives for two 120GB Patriot Wildfire SSDs, also in RAID 0. The result was a 1GB/s read-write disk array which was good for 2011. At some later point I upgraded to 32GB RAM (4x8GB)—memory was much cheaper back then. I also added a cheap 120GB mSATA SSD for scratch storage (the mSATA slot is SATAII, so somewhat slower).

After leaving AppHarbor I used the W520 only sporadically for testing Windows Server pre-releases and running test VMs. Whenever I did, though, I found myself thinking “oh yeah, this is a really neat and fast machine, I should use it for something”. In 2017 I moved into a condo and got to have a home office after 7 years of moving between assorted shared San Francisco houses and apartments. For my home PC, I wanted to see if I could make the W520 work.

My main requirement was a system that can power a 4K monitor running at 60Hz. The W520 has a reasonably fast discrete Nvidia GPU and can render 4K just fine using remote desktop, but neither the VGA port nor the DisplayPort on the laptop can push out that many pixels to a plugged-in monitor.

Luckily the W520 has a bunch of expansion options that can potentially host a more modern graphics adapter:

  • 2xUSB 3.0 ports
  • Internal mini-PCIe slot used for WiFi adapter (the Lenovo BIOS whitelists approved devices however, so a hacked BIOS is required to plug in anything fun)
  • ExpressCard 2.0 slot

ExpressCard technology was on the way out in 2011, but it had reached its zenith by then, and in the W520 it offers a direct interface to the system’s PCI Express bus. This avoids the overhead and extra latency of the USB protocol. An “eGPU” ExpressCard-to-PCIe adapter is required to plug in a real graphics card, and I got the $50 EXP GDC.

I settled on an Nvidia GTX 1050 Ti graphics card since it’s reasonably fast and power efficient. Note that a 220W Dell power brick is also required to power the PCIe dock and graphics card.

UPDATE: This script from a user on the eGPU.io forums fixed the “Error 43” incompatibility for me.

Recent Nvidia driver versions have introduced an incompatibility with eGPU setups and I spent some time troubleshooting “Error 43” before getting output on the external screen. I never got recent drivers to work, but version 375.70 from 2016 is stable for me—implausibly since it predates and is not supposed to support the 1050-series GPU. The old driver has proven to be a problem only when trying to play very recent games, and is not a blocker (but do get in touch if you happen to have gotten a setup like this working with the latest Nvidia drivers). I also tried a Radeon RX 560. While it didn’t require old drivers to work, it had all sorts of other display-artifact problems that I didn’t feel like troubleshooting.

The standard ZOTAC fans are loud and I replaced them with a single 80mm Noctua fan mounted with zip-ties. The fan connector is not the right one but can be jammed onto the graphics card fan header and wedged under the cooler. I removed the W520 keyboard and zip-tied another USB powered 92mm fan to the CPU cooler so that the (louder) built-in fan doesn’t have to spin up as frequently (not in picture below).

The final upgrade was two 512GB Samsung Evo 860 SSDs that replaced the old 120GB Patriot ones to get me 1TB of local storage.

Computer on pegboard

The whole assemblage (gutted laptop festooned with adapters, the PCIe dock with graphics card, power bricks) is mounted on the wall under my desk. After much experimentation the components are now zip-tied to an IKEA SKÅDIS pegboard. The pegboard comes with stand-off wall mounts which allows some of the many cables to run behind the board. I put a small fridge magnet on the screen bezel where the “lid closed” sensor sits to keep the built-in LCD screen off.

I’m still very happy with the W520, and while I’m not sure getting the eGPU setup working was economical in terms of time invested (over just buying parts for a proper desktop), it was a fun project. To my amazement, it merrily runs slightly dated games like XCOM 2 and Homeworld 2 Remastered in full, glorious 4K.

Lenovo still supports the platform and recently released BIOS updates to address the Meltdown/Spectre exploits.

With the RAID 0 SSD disk system, the quad-core W520 still feels deadly fast and is a joy to use. It boots instantaneously and restoring a Chrome session with stupid many tabs (something I’m bad about) is quick. With 32GB of RAM I can run Docker for Windows and a handful of VMs without worry.

4K TV as PC monitor

I use a 49″ 4K TV as my computer monitor both at home and at work. TVs are generally much cheaper than computer monitors of the same size. For my use of general productivity, programming and very occasional gaming, a mid-size 4K TV beats getting a couple of regular monitors on both price and result.

How to take advantage of a 49″ monitor? Windows 10 has simple window tiling functionality (“Snap Assist”), and you can use Win + arrow keys to quickly arrange windows in a 2×2 equally-sized grid. On a 49″ screen, each of those 4 grid elements is the size of a ~25″ screen, which is generous for browser windows or 2-pane text editors. This 4-window setup is what I mostly use when I’m being lazy.

If I’m working on the same thing for a while, I use WindowGrid to divvy up my screen to fit more apps. I really wish Windows had a way to customize the snap grid pattern, but WindowGrid is a good alternative. 3 horizontal by 2 vertical is perfect because you can have two centered apps, the three columns are plenty wide and it’s still quick to shuffle windows around a 6-pane grid.

Home: Samsung UN49KS8500 Curved 49″

I first got this 2016 curved Samsung 49″ TV to use with my home computer. I bought it used for $766. It’s the best monitor setup I’ve ever had (I’ve previously used and enjoyed dual 23″ IPS panels, 27″ Apple Cinema Display and 27″ Dell 4k monitor). I use it on a deep (32″) floating desk that I built in my office. The depth of the desk combined with the slight screen curve makes 49″ the perfect size. I don’t have to move my head much to look from one side of the screen to the other, and the screen fills up my field of view. The bezel is almost non-existent and the stand is attractive and doesn’t take up much desk-space. Text renders crisply, the colors are beautiful and the picture is calm at 60Hz.

Samsung TV in my home office

Office: TCL 49S405 49″

When I re-joined Salesforce a couple of months ago, I asked to get a 27″ 4K monitor because I had liked that while working at Docker. Unfortunately the IT department decided I was not approved for such extravagance (that’s not a dig at Salesforce—in fact, you should come work with me). Since I was so happy with my home setup, I went ahead and approved myself for a $320 2017 vintage 49″ non-curved 4K TV which was more-or-less the cheapest 4k TV on Amazon at the time.

Receiving the giant box at the office was a surreal experience: It seems improbable that this much plastic, LCD, electronics, polystyrene and cardboard can be put together and schlepped to a 3rd floor San Francisco office so cheaply. I suspect TCL is an Elon-Musk-like organization, only instead of obsessively working backwards from the cost of kerosene and aircraft-grade aluminium to determine how cheaply one ton of satellite can be put into orbit, they do the same for plastic and liquid crystals to get cheap TVs into people’s living rooms.

With the TV assembled on my work desk, I realized that this setup was not going to be as awesome as my home configuration:

  • My work desk is narrower, so I sit closer to the screen
  • With no curve (and sitting so close) the sides of the screen begin to suffer from fading because of the oblique viewing angle
  • While not bad, the TV screen panel is just not as good as the Samsung unit

I’ve partly addressed the first two problems by building and mounting desk extenders, and after much calibration and fiddling, I managed to get an OK picture out of the TCL TV. Even so, I’d definitely recommend limiting yourself to 43″ unless you have a big desk and/or you can get a curved TV. I had in fact planned to get the 43″ TCL model, but it’s only $20 cheaper so I made the mistake of springing for 49″.

Desk extenders for the corp office desk

Summary

Overall I’d heartily recommend getting a 4K TV for use as a monitor if you have the space. I can’t think of a setup I’d prefer over my current home office TV: dual 27″ screens have a bezel-seam down the middle and I have way more screen real-estate than any of the 34″ wide-screen monitors available. 4K has so many pixels that, even when spread out over 49″ and sitting just 2-3′ away, graininess is not really an issue. Another advantage is that TVs come with passable speakers—I listen to music piped over HDMI from my computer at home, and it’s fine (if not exactly amazing).

49″ is the largest TV that makes sense in my experience, and depending on your desk and ergonomics, 43″ is probably a better choice. When choosing a TV model, be sure to get one that supports chroma 4:4:4 (my $320 one does). Otherwise the TV will sub-sample the image and text will look smudged.

One final word of caution: If you sit in an open office (like me), expect to spend on average 15 minutes every day explaining to random passersby why there’s a TV on your desk.

ASP.NET 5 Docker language stack with Kestrel

This blog post presents a Docker Language Stack for creating and running ASP.NET 5 (née vNext) apps. It’s based on my work last week to run ASP.NET 5 on Google Container Engine.

In the interim, the ASP.NET team has released their own Docker image. It’s not really up to spec for being a Docker language stack though, so I forked it, added what was missing and published it on Docker Hub.

Other people already sent PRs to add onbuild support to the ASP.NET repo, but there’s apparently some uncertainty about how ASP.NET 5 apps are going to get built, so they’re holding off on merging. I hope that eventually the work presented here will get folded into the official repo, just like it happened with the Mono stack I created a month ago. That’s the base for what’s now the official Mono Docker language stack, which, incidentally, is what the ASP.NET docker image derives from!

How to use

Using the onbuild image is pretty simple. To run the HelloWeb sample, clone that repo and add this Dockerfile in the HelloWeb dir, next to the project.json:

FROM friism/aspnet:1.0.0-beta1-onbuild
EXPOSE 5004

Now build the image:

docker build -t my-app .

And finally run the app, exposing the site on port 80 on your local machine:

docker run -t -p 80:5004 my-app

Note that the -t option is currently required when invoking docker run. This is because there’s some sort of bug in Kestrel that requires the process to have a functional tty to write to – without a tty, Kestrel hangs on start.

Google Container Engine for Dummies

Last week, Google launched an alpha version of a new product called Google Container Engine (GKE). It’s a service that runs pre-packaged Docker images for you: You tell GKE about images you want to run (typically ones you’ve put in the Docker Registry, although there’s also a hack to run private images) and how many instances you need. GKE will spin them up and make sure the right number is running at any given time.

The GKE Getting Started guide is long and complicated and has more JSON than you can shake a stick at. I suspect that’s because the product is still alpha, and I hope the Google guys will improve both the CLI and web UIs. Anyway, below is a simpler guide showing how to stand up a stateless web site with just one Docker image type. I’m also including some analysis at the end of this post.

I’m using a Mono/ASP.NET vNext Docker image, but all you need to know is that it’s an image that exposes port 5004 and serves HTTP requests on that port. There’s nothing significant about port 5004 – if you want to try with an image that uses a different port, simply substitute as appropriate.

In the interest of brevity, the description below skips over many details. If you want more depth, then remember that GKE is Kubernetes-as-a-Service and check out the Kubernetes documentation and design docs.

Setup

  1. Go to the Google Developer Console and create a new project
  2. For that project, head into the “APIs” panel and make sure you have the “Google Container Engine API” enabled
  3. In the “Compute” menu section, select “Container Engine” and create yourself a new “Cluster”. A cluster size of 1 and a small instance is fine for testing. This guide assumes cluster name “test-cluster” and region “us-central1-a”.
  4. Install the CLI and run gcloud config set project PROJECT_ID (PROJECT_ID is from step 1)

Running raw Pod

The simplest (and not recommended) way to get something up and running is to start a Pod and connect to it directly with HTTP. This is roughly equivalent to starting an AWS EC2 instance and connecting to its external IP.

First step is to create a JSON-file somewhere on your system, let’s call it pod.json:

{
  "id": "web",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta2",
      "containers": [
        {
          "name": "web",
          "image": "friism/aspnet-web-sample-web",
          "ports": [
            { "containerPort": 5004, "hostPort": 80 }
          ]
        }
      ]
    }
  },
  "labels": {
    "name": "web"
  }
}

What you should care about is the Docker image/repository getting run (friism/aspnet-web-sample-web) and the port mapping (the equivalent of docker run -p 80:5004). With that, we can tell GKE to start a pod for us:

$ gcloud preview container pods --cluster-name test-cluster --zone us-central1-a \
    create web --config-file=/path/to/pod.json
...
ID                  Image(s)                       Host                Labels              Status
----------          ----------                     ----------          ----------          ----------
web                 friism/aspnet-web-sample-web   <unassigned>        name=web            Waiting

All the stuff before “create” is boilerplate and the rest is saying that we’re requesting a pod named “web” as specified in the JSON file.

Pods take a while to get going, probably because the Docker image has to be downloaded from Docker Hub. While it’s starting (and after), you can SSH into the instance that’s running your pod to see how it’s doing, eg. by running sudo docker ps. This is the SSH incantation:

$ gcloud compute ssh --zone us-central1-a k8s-test-cluster-node-1

The instances are named k8s-<cluster-name>-node-1 and you can see them listed in the Web UI or with gcloud compute instances list. Wait for the pod to change status to “Running”:

$ gcloud preview container pods --cluster-name test-cluster --zone us-central1-a list
ID                  Image(s)                       Host                              Labels              Status
----------          ----------                     ----------                        ----------          ----------
web                 friism/aspnet-web-sample-web   k8s-<..>.internal/146.148.66.67   name=web            Running

The final step is to open up for HTTP traffic to the Pod. This setting is available in the Web UI for the instance (eg. k8s-test-cluster-node-1). Also check that the network settings for the instance allow for TCP traffic on port 80.


And with that, your site should be responding on the external ephemeral IP address of the host running the pod.

As mentioned in the introduction, this is not a production setup. The Kubernetes service running the pod will do process management and restart Docker containers that die for any reason (to test this, try ssh’ing into your instance and docker-kill the container that’s running your site – a new one will quickly pop up). But your site will go down in case there’s a problem with the pod, for example. Read on for details on how to extend the setup to cover that failure mode.

Adding Replication Controller and Service

In this section, we’re going to get rid of the pod-only setup above and replace with a replication controller and a service fronted by a loadbalancer. If you’ve been following along, delete the pod created above to start with a clean slate (you can also start with a fresh cluster).

First step is to create a replication controller. You tell a replication controller what and how many pods you want running, and the controller then tries to make sure the correct formation is running at any given time. Here’s controller.json for our simple use case:

{
  "id": "web",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "web"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "frontendController",
          "containers": [{
            "name": "web",
            "image": "friism/aspnet-web-sample-mvc",
            "ports": [{"containerPort": 5004, "hostPort": 80}]
          }]
        }
      },
      "labels": {"name": "web"}
    }
  },
  "labels": {"name": "web"}
}

Notice how it’s similar to the pod configuration, except we’re specifying how many pod replicas the controller should try to have running. Create the controller:

$ gcloud preview container replicationcontrollers --cluster-name test-cluster \
    create --zone us-central1-a --config-file /path/to/controller.json
...
ID                  Image(s)                       Selector            Replicas
----------          ----------                     ----------          ----------
web                 friism/aspnet-web-sample-mvc   name=web            1

You can now query and see the controller spinning up the pods you requested. As above, this might take a while.

Now, let’s get a GKE service going. While individual pods come and go, services are permanent and define how pods of a specific kind can be accessed. Here’s service.json that’ll define how to access the pods that our controller is running:

{
  "id": "myapp",
  "selector": {
    "app": "web"
  },
  "containerPort": 80,
  "protocol": "TCP",
  "port": 80,
  "createExternalLoadBalancer": true
}

The important parts are selector which specifies that this service is about the pods labelled web above, and createExternalLoadBalancer which gets us a loadbalancer that we can use to access our site (instead of accessing the raw ephemeral node IP). Create the service:

$ gcloud preview container services --cluster-name test-cluster --zone us-central1-a create --config-file=/path/to/service.json
...
ID                  Labels              Selector            Port
----------          ----------          ----------          ----------
myapp                                   app=web             80

At this point, you can go find your loadbalancer IP in the Web UI, it’s under Compute Engine -> Network load balancing. To actually see my site, I still had to tick the “Enable HTTP traffic” boxes for the Compute Engine node running the pod – I’m unsure whether that’s a bug or me being impatient. The loadbalancer IP is permanent and you can safely create DNS records and such pointing to it.

That’s it! Our stateless web app is now running on Google Container Engine. I don’t think the default Bootstrap ASP.NET MVC template has ever been such a welcome sight.

Analysis

Google Container Engine is still in alpha, so one shouldn’t draw any conclusions about the end-product yet (also note that I work for Heroku and we’re in the same space). Below are a few notes though.

Google Container Engine is “Kubernetes-as-a-Service”, and Kubernetes is currently exposed without any filter. Kubernetes is designed based on Google’s experience running containers at scale, and it may be that Kubernetes is (or is going to be) the best way to do that. It also has a huge mental model however – just look at all the stuff we had to do to launch and run a simple stateless web app. And while the abstractions (pods, replication controllers, services) may make sense for the operator of a fleet of containers, I don’t think they map well to the mental model of a developer just wanting to run code or Docker containers.

Also, even with all the work we did above, we’re not actually left with a managed and resilient capital-S Service. What Google did for us when the cluster was created was simply to spin up a set of machines running Kubernetes. It’s still on you to make sure Kubernetes is running smoothly on those machines. As an example, a GKE cluster currently only has one Master node. This is the Kubernetes control plane node that accepts API input and schedules pods on the GCE instances that are Kubernetes minions. As far as I can determine, if that node dies, then pods will no longer get scheduled and re-started on your cluster. I suspect Google will add options for more fault-tolerant setups in the future, but it’s going to be interesting to see what operator-responsibility the consumer of GKE will have to take on vs. what Google will operate for you as a Service.

Mono Docker language stack

A couple of weeks ago, Docker announced official pre-built Docker images for a bunch of popular programming languages. Each stack generally consists of two Dockerfiles: a base Dockerfile that installs system dependencies required for that language to run, and an onbuild Dockerfile that uses ONBUILD instructions to transform app source code into a runnable Docker image. As an example of the latter, the Ruby onbuild Dockerfile runs bundle install to install libraries specified in an app’s Gemfile.

Managing system dependencies and composing apps from source code is very similar to what we do with Stacks and Buildpacks at Heroku. To better understand the Docker approach, I created a language stack for Mono, the open source implementation of Microsoft’s .NET Framework.

UPDATE: There’s now a proper official Docker/Mono language stack, I recommend using that.

How to use

A working Docker installation is required for this section.

To turn a .NET app into a runnable Docker image, first add a Dockerfile to your app source root. The sample below assumes a simple console app with an output executable name of TestingConsoleApp.exe:

FROM friism/mono:3.10.0-onbuild
CMD [ "mono", "./TestingConsoleApp.exe" ]

Now build the image:

docker build -t my-app .

The friism/mono images are available in the public Docker Registry and your Docker client will fetch them from there. Docker will then execute the onbuild instructions to restore NuGet packages required by the app and use xbuild (the Mono equivalent of msbuild) to compile source code into executables and libraries.

The Docker image with your app is now ready to run:

docker run my-app

If you don’t have an app to test with, you can experiment with this console test app.

Notes

The way Docker language stacks are split into a base image (that declares system dependencies) and an onbuild Dockerfile (that composes the actual app to be run) is perfect. It allows each language to get just the system libraries and dependencies it needs. In contrast, Heroku has only one stack image (in several versions, reflecting underlying Linux distribution versions) that all language buildpacks share. That stack is at once both too thick and too thin: It includes a broad assortment of libraries to make supported languages work, but most buildpack maintainers still have to hand-build dependencies and vendor in the binaries when apps are built.

Docker has no notion of a cache for ONBUILD commands whereas the Heroku buildpack API has a cache interface. No caching makes the Docker stack maintainer’s life easier, but it also makes builds much slower than what’s possible on Heroku. For example, Heroku buildpacks can cache the result of running bundle install (in the case of Ruby) or nuget restore (for Mono), greatly speeding up builds after the first one.

Versioning is another interesting difference. Heroku buildpacks bake support for all supported language versions into a single monolithic release. What language version to use is generally specified by the app being built in a requirements.txt (or similar) file and the buildpack uses that to install the correct packages.

Docker language stacks, on the other hand, support versioning with version tags. The app chooses what stack version to use with the FROM instruction in the Dockerfile that’s added to the app. Stack versions map to versions of the underlying language or framework (eg. FROM python:3-onbuild gets you Python 3). This approach lets the Python stack, for example, compile Python 2 and 3 apps in different ways without having a bunch of branching logic in the onbuild Dockerfile. On the other hand, pushing an update to all Python stack versions becomes more work because the tags have to be updated individually. There are tradeoffs in both the Docker and Heroku buildpack approaches, I don’t know which is best.

Docker maintains a free, automated build service that churns out hosted Docker images for everyone to use. For my Mono stack, Docker Hub pulls updates from the GitHub repo with the Dockerfiles and builds the relevant tags into images. This is very convenient for stack maintainers. Heroku has no hosted service for building buildpack binaries, although I have documented a (Docker-based) approach to scripting this work.

(Note that, while Heroku buildpacks are wildly successful, it’s an older standard that predates Docker by many years. If it seems like Docker has gotten more things right, it’s probably because that project was informed by Heroku’s experience and by the passage of time).

Finally, and unrelated to Docker and Heroku, the Mono Project now has an APT package repository. This is pretty awesome, and I sincerely hope that the days of having to compile Mono from source are behind us. I don’t know if the repo is quite stable yet (I had to download a key without using SSL, the mono-devel package is versioned 3.10.0-0xamarin1 and the package fails to declare a dependency on udev), but it made the Mono Docker stack image a lot simpler. Check out the diff going from 3.8.0 (compiled from source) to 3.10.0 (installed from APT repo).
