My name is Alexander Miller, and this website is where I keep notes about my projects and research.
As a kid I always enjoyed working with tools, electronics, and computers. During high school I read a couple of textbooks about microprocessor system architecture (1, 2), and eventually worked my way through the basic systems course of the Elenco Micro-Master training kit (book).
I attended Virginia Tech sporadically between 2002 and 2013 and eventually earned a Bachelor of Science in Computer Engineering.
During college I started working for a small agency called Modea, where I wore the hats of sysadmin, netops, devops, and backend engineer over the course of 7 years. We parted ways shortly after I relocated from Blacksburg to Durham. Eventually they spun out a division as Ozmo, and both companies are doing great to this day. I highly recommend checking them out if you're in their target market, or if you're looking for a tech job in Blacksburg, Virginia.
While seeking experience in new fields I received an offer from Hortonworks, a primary developer of the Hadoop platform. The position was in support, but I saw it as an opportunity to get a crash course in real-world distributed systems, and I'm glad I took it. The environment was fast-paced and sometimes difficult, but I was able to exchange knowledge and techniques with talented colleagues.
I got to dive deep into many areas of distributed systems, including filesystems, scheduling, and security, plus a fair amount about applications running on the platform, such as datastores, workflows, and batch/interactive/stream processing. My knowledge of Linux and Kerberos grew substantially as well. On the soft-skills side I learned quite a bit, considering this was my first role with frequent customer interaction. It was interesting to work on a huge variety of clusters ranging in size from 8 to 4000 nodes, along with meeting the admins who managed them.
Internally I started a few voluntary projects, including tools for lab cluster creation and for support case analysis, which led to creating a support tooling team. This provided a small opportunity for some development projects, but I really wanted to return to engineering. When I realized the path within the company would be a long one, I started researching startups to join.
After interviewing full-time in the Bay Area for two months, I decided to join Endless Computers. Their product is a Linux distribution intended for emerging markets, where connectivity can be slow or intermittent and users might have little to no prior experience with computers. These constraints led to some interesting design choices, such as an immutable filesystem (using OSTree) with apps distributed as Flatpaks, and an offline-first approach. The full install actually contains the majority of Wikipedia! (text-only, but still...)
The entire Endless organization is top-notch. Many employees come from other prominent open source companies such as Red Hat, Canonical, and Collabora. My responsibilities were split between internal tooling and data engineering. For internal tooling, I was tasked with creating an API server for cataloging the multitude of disk images produced by each build. This was my first time using Flask, and the first time I used Python for more than a simple script. I couldn't have asked for a more helpful or patient mentor. On the data engineering side, I was responsible for an analytics system along with another former Hortonworks colleague. He handled the analytics and visualizations, and I automated setup and management of the platform.
Currently my time is split between consulting and self-directed R&D. See the contact page for details.