I just completed Startup Engineering, a Stanford MOOC offered through Coursera. There are two broad themes in the course: one that is technical and another that is philosophical.
Although the course assumes limited technical knowledge, it would be difficult for a person without at least moderate technical skills to complete this portion with any depth of understanding. A brief summary of topics covered demonstrates this:
Any one of those topics could be a lengthy course on its own and, even though the course is designed more for exposure, the depth of the subject matter can be overwhelming. But for those who persevered, the reward was a strong foundation, along with reference materials (discussed later under Reference Material) to revisit.
Beyond technical knowledge, the course offers practical direction on how to think about and execute startup concepts. Two things make this work exceptionally well: first, the depth provided in the written lectures (discussed later under Reference Material), and second, the instructor, Balaji Srinivasan, whose track record and experiences provide the gravitas that separates this course material from the inspirational startup drivel people post in hopes of making the front page of Hacker News.
Perhaps the most fascinating thing about the philosophy arc of the course comes from its origin: it was initially offered by Peter Thiel as CS183 at Stanford. Thiel is well known as an avowed libertarian, and Srinivasan, beyond describing the course as a “spiritual sequel” to its initial offering, continues along these themes, providing credible evidence in support of this political and economic philosophy. One lecture in particular, concerning regulation, makes a devastating case for how interventions impede innovation and business.
Most MOOC offerings involve video lectures with sparse reference materials. The videos are well produced and it’s easy to watch, pause, and rewind. Startup Engineering was vastly different: all of the lectures were written in long form, and although there were videos, Srinivasan essentially skims through the written material in them. Some people wanted more screen time for the instructor, but this was one of the best aspects of the course for me; I ended up with a book of lecture material on my tablet to consume at my own pace. The lectures are dense: I just counted 130 external links/references in the first lecture alone. The lectures that involve tasks performed on AWS were straightforward and methodical. I cannot emphasize enough how valuable these were, and I will probably use them to start some of the more technical tasks again from scratch to uncover what I might have missed. One last thing that deserves mention is the guest lectures delivered by founders of some better-known startups. They were the proverbial icing on the cake.
I would recommend Startup Engineering to anyone interested in starting a technology company, or to anyone who simply wants a broad overview of the technologies and engineering behind web applications. Although the course is officially past its timeframe, it’s still available for sign-up as I write this blog entry. I would recommend waiting for the next offering and participating in the competition among students for crowdsourcing a business idea.
 I’m not “libertarian” myself, although I do think all of the ideas and arguments presented in the course should be taken seriously. This wasn’t the crazy uncle on Medicare and Social Security (irony intended) venting personal frustrations about “The Government”; it was well-thought-out, evidence-based reasoning promoting libertarian thinking.
I’ve been enjoying my current class, Startup Engineering, from Coursera. My own fascination with startup culture preceded the “Dot Com” era; I always admired people like Nolan Bushnell, Steve Jobs, and Bill Gates. But I came of age during “The Bubble,” and hearing all of the sensational stories made me excited about what I might become a part of as a young technologist. Even after the party ended and it seemed so obvious that most startup emperors had no clothes, I was still under the spell of people like Paul Graham, whose essays and work at YCombinator have influenced so many of us in our thinking about building businesses.
One of the key points made by the Startup Engineering professor, Balaji Srinivasan, is that there is a difference between a “small business” and a “startup.” The major difference has to do with the size of the market. According to the lecture notes: “Startups Must Exhibit Economies of Scale.”
When I think about the companies I admire, I realize that many don’t have enormous markets in a traditional sense. I find myself admiring companies like Wolfram Research for products like Wolfram Alpha, or a gaming company like Paradox Interactive for publishing niche titles known for their sophistication and obsessive communities rather than mass market appeal. Both, if I’m not mistaken, are privately held, which means that while neither is “small,” neither has to kowtow to the short-term demands of shareholders.
Although I’m in learning mode at present, I hope one day to build my own business selling software. It is, after all, the reason I took the course. But from a philosophical and temperamental side, this has helped me clarify my interest in something smaller than a conventional startup. I would rather take on an ambitious goal in a small market than make a conventional concept “mainstream.” Worse yet, I wouldn’t want to build a sand-castle product designed just to make a buck rather than make an impact.
 The First Quarter: A 25-Year History of Video Games is a great history of early gaming. It was later edited and republished as The Ultimate History of Video Games: From Pong to Pokemon--The Story Behind the Craze That Touched Our Lives and Changed the World, but I've only personally read the former.
 Folklore.org is a great resource for stories around the early days of Apple and the building of the original Macintosh.
 Steve Jobs, Bill Gates, and many others are chronicled in Robert Cringely’s excellent book Accidental Empires: How the Boys of Silicon Valley Make Their Millions, Battle Foreign Competition, and Still Can't Get a Date.
 According to Graham, the valuation of companies incubated at YCombinator as of June 2013 was $11.7 billion.
 By this I mean a lot of people. I’m aware of companies with a smaller market that achieve their revenues with higher prices.
 Like Johan Andersson’s: “to create believable worlds.”
I never thought I’d find myself saying it, but my next computer is going to be a classic desktop machine; a “tower,” as some of the gray-bearded veterans of the “Build Your Own PC” era used to say. Although some people have always preferred this type of machine, my last 15 years (20 if you want to throw in college) have been spent on laptops. It’s been a practical decision: as a student I needed mobility to work in the library or in a classroom, and as a newly minted professional I had jobs that involved a lot of travel. And I liked working in coffee shops.
Moving to a desktop feels in many ways like moving backward. As everyone and everything seems to get more “mobile first,” as there is more and more competition to build a smaller laptop with more battery life, it would seem like the action of a misty-eyed classicist, the kind of guy who restores cars or listens to music from the 60s for the look and feel of a bygone era.
But it’s the future that has moved me backward. Last year, after much agonizing, I bought myself a Nexus 7, and the effect was that I needed to move my laptop less and less. The things I did on it, checking email, reading the internets, watching screencasts, were not only all available on my tablet, but in many ways better there.
As I come up on my roughly three-year upgrade cycle, I’ve realized that in terms of bang for your buck, building a desktop machine is not only more economical, it comes with more power. Power is something I’ve always wanted, but as I find myself more and more drawn to a “virtualize everything” approach to operating systems, it’s something I now need. In a perfect world I run a virtualized OS for a work machine, another for personal use, and several experimental VMs for running prerelease software and Linux. These are all things I do with my laptop, but there are better, cheaper options for optimizing CPU and memory on a desktop machine.
The tipping point, perhaps, is that I no longer travel for work. That is the one element that might have kept the appeal of a powerful, portable laptop alive, but as a remote worker who needs his home office for better bandwidth and privacy than could be had at a coffee house or library, it’s a closed chapter. Even with the most robust tablets out there, any serious undertaking seems to involve a more traditional laptop or desktop computer. When the occasional travel or mobile scenario shows up, I’ll still have one of my old laptops to fall back on.
I have to wonder how many people like me still buy laptops by force of habit but wind up doing most “mobile” tasks on a tablet or a phone. I also wonder if, once chained to a desktop, I’ll start to discover all sorts of scenarios when it would have been convenient to have a laptop instead.
 Fewer distractions on a single purpose screen, apps like Instapaper, being able to multitask with real life (watching my toddler).
 Ubuntu for the most part although I realized a big reason I wasn’t doing any Windows 8 development was because I didn’t have the time or inclination to try to merge my heavily Windows 7 life with something new. It would have been easier to learn WinRT in parallel on a VM.
Are Vim and Emacs as powerful as their legend would have it, or do the “lesser” developers, the “Morts” of the world, simply spend less time learning their tools in favor of learning their problem domain?
I read an old blog post from Brian Carper about learning Emacs recently:
Emacs isn't difficult to learn. Not in the sense of requiring skill or cleverness. It is however extremely painful to learn. I think there's a difference.
The key word is tedium. Learning Emacs is a long process of rote memorization and repetition of commands until they become muscle memory. If you're smart enough to write programs, you can learn Emacs. You just have to keep dumping time into the task until you become comfortable.
I’m willing to assume Brian is a clever guy; the fact that he writes books about Clojure is like a dog whistle for intelligent programmers. How long does it take?
Well, it took me over a year to be able to sit down at Emacs and use it fluidly for long periods of time without tripping over the editor.
So picking up on this specific time frame, here is my hypothesis: perhaps it isn’t anything so special about the capabilities of an Emacs or a Vim; perhaps it’s this timeframe of sustained practice that produces the productivity associated with these tools, rather than the tools themselves.
A different way to think about it: how many people using Visual Studio know more than a handful of its many shortcuts? How many Morts really use the Command Window to automate or interact with the tool? It’s always fun and enlightening to watch a person like Scott Hanselman use Visual Studio (how many talks have you been to where some keyboard shortcut was your biggest takeaway?). It’s even more of an eye opener to see what is possible with the power of extensions such as Mads Kristensen’s Web Essentials.
Some closing thoughts on this open ended assertion [to mitigate the forthcoming beat down in comments?]:
I will be learning a little Emacs as part of a MOOC I’m enrolled in, Startup Engineering. I hope to write a follow-up post, but if you do find my reasoning faulty for lack of Emacs or Vim experience, what do you recommend I look for in the painful process of getting going?
I have used Visual Studio on a nearly daily basis for a long time now. I have not, however, spent more than a few minutes in a concerted effort to learn keyboard shortcuts or add-in programming. Any recommendations on getting better with the IDE at hand are welcome.
When Visual Studio is too large a hammer, I use Notepad++. If you have any recommendations about Notepad++ or any other text editors (yes, I know Sublime is very popular these days), I’d love to catalog them from the comments.
In the wake of the death of Google Reader I was planning to write an ode to RSS and how important it is for the web but Dieter Bohn has done the job in his article on The Verge, “Why RSS still matters.”
In short, I’m hopeful because something that valuable and so passionately used won’t just disappear. This represents an opportunity for all of us who consider ourselves innovators to build something to fill the gap left by Google Reader. At present I’m kicking the tires on NewsBlur, but here are some things I’d love to see in the more evolved RSS clients people will start building:
1. Tools for long form reading – similar to Instapaper which I use for deferred but eventual reading.
2. Shared content from feeds, perhaps also syndicated with annotations. Take the social network out of the walled garden and make it open.
3. Tools for curating and annotating syndicated content.
4. Tools for feeds with specialized content like video blogs or podcasts.
5. Integrated Groupware: things like long form chat and the ability to post comments in an environment that makes them visible to some group of people. The big trick is to avoid the walled in social network; we have too many of those.
Any other ideas?
There’s An App For That
Thousands of apps, native and web based, litter the consumer-facing stores of the big platform vendors, each with some story about how it will make your life better. Even if your problem domain is something as obscure as surviving an earthquake, someone has written a simple app with the singular purpose of keeping you alive.
What’s interesting to me, in a world saturated with apps, is how we are now conditioned to look for boxes to fill, buttons to push, and single-domain user interfaces in order to get things done. Whether it’s on the web or a mobile device, this is the paradigm we seem to find everywhere.
Lego and Clay
Many moons ago, when Silverlight was still fresh and bright in the eyes of the Microsoft developer crowd, I attended a talk given by Rick Barraza on animation. He was attempting to walk those of us newer to Silverlight through his thought process and how he developed some of the showcase pieces of software that his then employer, Cynergy, had built for Microsoft. As he showed attendees his process of manually animating an effect of falling snow, he discussed a paradigm analogous to working with clay and contrasted it with how many of us in the Microsoft sphere have become accustomed to snapping together controls and user interface building blocks, a more Lego-oriented perspective on software development. From an interview:
There are two types of personalities in this space. Those that like solving problems using the Lego method: snapping predictable and scalable objects together. And those who like solving problems with clay: loose and messy, but with a high level of customization and intricacy in the finished product. The tension between those two camps and their evolution, that cross pollination of ideas and techniques, should create some compelling experiences.
I will admit I was one of those developers who was uneasy and frustrated without snap-in controls. Even though things got better as Silverlight evolved, the idea of an evolving framework of prebuilt building blocks was something I had a hard time letting go of. Even if the outcome was battleship gray, I could build things quickly and without the type of detail and ownership I would encounter by making everything from scratch.
Apps Are Legos
Apps fit the Lego mindset; they build a layer of abstraction on top of the task at hand, hoping to “automate” some of the structure around it. The goal of app designers (besides the money) is that this abstraction makes for a “solved problem,” so that people would see any effort of their own as reinventing the wheel.
For example, if you have a newborn, one of the things they have you do in the first few days is track when the baby eats and when it poops. Since this problem predates the dawn of the mobile era, people have usually kept a notebook: one column where you wrote the activity, one where you wrote the time, and one column with “notes” for anything extra. These days, however, there are apps to simplify that, with push buttons that indicate the activity and automatically record the details. Why bother with the notepad, having to find a pen, or your inconsistent notations that the nurse has a hard time reading? Just push a button!
If you are happy living in an abstraction, this makes perfect sense. Most app design doesn’t anticipate too much around a problem and focuses on a specific task. Some of the more subtle and clever designs steer users in a particular direction or eliminate things the app designer finds unnecessary. It all works great as long as there is no edge case or unexpected element within the problem domain. Continuing with the idea of newborns, our imaginary app that tracked eating, sleeping, and pooping would work well until the doctor noticed something and asked you to note something special along with the regular activity (e.g., every time little Jonny’s poop is green, write down how long it had been since he had eaten!).
For those who dislike snap-in pieces or want something completely unique, the blank piece of paper offers the utmost freedom to solve problems as diverse as taking care of a newborn and planning a writing schedule. In a digitized environment, clay comes primarily in the form of text editing or the slightly more evolved spreadsheet.
I had a friend who worked for a defense contractor a while back who, although he couldn’t tell me much about his day job, alluded to being an engineer on a team designing missile defense systems. When I asked what kind of tools they used for something like that, expecting something very “advanced” and esoteric, he just replied in earnest: a lot of Excel. At the time I thought he was being discreet, but these days I don’t have a hard time believing it, especially if he had to do a lot of mathematical modeling. I’ve also started to pay attention to the world of finance, where the spreadsheet is the lingua franca whether you’re doing an asset allocation model or some form of quantitative analysis.
It takes no stretch of the imagination to visualize a spreadsheet that tracks events for a newborn. But because a spreadsheet is like clay, designing it would involve some messiness, and automating things would require a bit more effort; the type that would send most people back to the Lego jar.
The sculpting of this spreadsheet would also become a reflection of the sculptor. It could be a cringe-worthy effort of repetition, manual entry, and strange conventions, or it could be something singular: a small, elegant, and beautiful reflection of how they chose to solve the problem.
As with most things, it is tempting to be lured into a false dichotomy of Lego versus clay. So many other facets of life are polluted with characterizations of polar opposites where one has to pick a side. Artists and designers argue about form versus function. Our politics are polluted with arguments that must be either liberal or conservative. Personality types are misconstrued as either extroverted or introverted. Even my petty world of comic book collecting has the perennial argument of Marvel versus DC (Marvel, of course).
Rather than picking a side or calling one bad, I think it’s more constructive to do a self diagnosis of one’s predispositions and then try to find a point at which the two approaches can inform each other. I’ve seen amazing work from those that operate with Lego; from Alice Finch’s real world recreation of Hogwarts Castle to the type of apps that make the heart sing. At the same time, it’s hard to deny the beauty and power of sculpture; who wouldn’t stop for Cordier’s Bust of an African Woman?
My observation is that for those who are oriented toward clay, the danger lies in being overwrought or too singular: something that is molded so endlessly that it loses its sense of purpose, or something that is so unique to a circumstance that it is never again useful. For those oriented toward Lego, the danger seems to come in living too much inside an abstraction and becoming unable to see around it.
It Never Occurred To Me
In general, it seems as though our culture is pushed, by a mix of commercial forces and laziness, toward the Lego mindset: always looking for the “right” prepackaged software that meets our needs. I know I’m a Lego person, especially when it comes to simple, common problems. But recently I’ve been thinking about how little I actually know about Excel, and how many app-oriented problems would be trivial with a spreadsheet. For example, I used a 37signals app for managing a “To Do” list for my reading. How difficult would this be in Excel? Why did it never occur to me to simply make a spreadsheet?
In the question of balance, it is important that decisions be made with a cognizance of the alternatives, choosing deliberately among the different paths that can be taken. I have personally been too app-focused, too dependent on prepackaged solutions, whether it’s To Do list software or project management. What would happen if I started trying to solve these problems with clay? What would happen if, rather than color by number, I began to paint?
Apps are useful. So too is the mindset behind them; I’ve learned that most things that become mainstream do so not because people are sheep but because they are designed well enough to accommodate real needs. It is also wasteful to focus on our own implementations of solved problems when doing so distracts from the larger problems we are trying to solve. Back to babies: I know that when my children were born my focus was on spending time with them, helping my wife, getting sleep, and maintaining my sanity. Creating a perfect notebook or spreadsheet to track the baby’s activity was less than a passing thought in my mind, for the right reasons.
But there are things that warrant the attention. Things that I do every day, things that are core to my job and profession, ought to be candidates for a conscious choice between an out-of-the-box solution and something customized and flexible. The pre-made software may do the trick, but at this point it seems to me that the Lego-oriented sphere of my thinking should inform the part that tends toward clay, the part of my process where I think about how I would solve the problem myself if it became necessary.
Joel Spolsky wrote a long time ago that most of the time it’s not a good idea to reinvent the wheel, but that you should do so when you are operating on the core of your business, platform, or goal. This is debatable, and I’m sure any reader has an opinion (I’ve seen such attempts become poor, feature-starved shadows of what is open and freely available). But on the question of choosing full-featured software or an app that solves our problems by putting us into someone else’s abstraction, versus the sweat equity of building something ourselves and really thinking through our needs, I think it’s worth the kind of consideration that makes it a deliberate choice, with an awareness of the benefits and limitations we impose upon ourselves. Despite the generality that none of us is a unique and beautiful snowflake, we all do come with our own edge cases.
First and foremost the full disclosure: I’m new to Git. Although I’d used Github for a few of my errant thoughts/projects, it was for the most part just copying commands without a deep understanding of the benefits of distributed source control.
Today a light clicked on when I was finally able to set up a Dropbox repository and share it out with some collaborators. First, here are the steps I followed, adapted from an old post by Roger Stringer.
Setting Up Your Remote Repository On DropBox
1. Get Dropbox, create your folder. (If you don’t have a Dropbox account, please use this affiliate link which will give me more space).
2. Create a “bare” repository there:
- Start Git Bash in the folder and use the following commands:
git init --bare
- Once you’ve created your empty repository you’ll see the repository metadata files there; that’s expected.
Setting Up Your Local Repository, Pushing To Dropbox
Now that you have your DropBox folder set up to be a remote repository, you can set up your working folder. I already have a Visual Studio project with some files in it so I just started a Git Bash in that folder.
1. Double check that you’ve got a .gitignore file that is configured for Visual Studio or whatever project files you’d like to ignore.
2. Create the local git repository with the following:
git init .
3. Now add all your files to be tracked by the local repository with the following:
git add --all
4. Commit the changes you made by adding all those files:
git commit -m "Added initial files for git tracking"
5. Now set up your remote repository as the previously created “bare” repository in Dropbox. Here is the one small divergence from the original article, since the path is a Windows file system path:
git remote add origin /c/users/curufin/dropbox/gitbox
In the above:
- origin represents the name of the remote repository
- In the local path, curufin is my username on Windows and gitbox is the name of the folder I made.
6. Push your files to the remote dropbox repository:
git push origin master
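Put together, the whole setup can be run end to end as a single script. This is a minimal sketch: the paths below are placeholders created with mktemp for illustration (in real use you would substitute your Dropbox folder and project folder), and the identity, branch pinning, and .gitignore lines are assumptions added to make the sketch self-contained:

```shell
# Placeholder paths; substitute your real Dropbox folder and project folder
# (e.g. /c/users/curufin/dropbox/gitbox in Git Bash on Windows).
DROPBOX_REPO="$(mktemp -d)/gitbox"
WORKDIR="$(mktemp -d)/myproject"

# Create the "bare" repository inside the Dropbox folder.
git init --bare "$DROPBOX_REPO"

# Create the local repository in the working folder and commit initial files.
mkdir -p "$WORKDIR"
cd "$WORKDIR"
git init .
git symbolic-ref HEAD refs/heads/master     # pin the branch name to master
git config user.name  "Your Name"           # one-time identity, if unset
git config user.email "you@example.com"
echo "bin/" > .gitignore                    # stand-in for a real .gitignore
git add --all
git commit -m "Added initial files for git tracking"

# Point at the Dropbox folder as the remote and push.
git remote add origin "$DROPBOX_REPO"
git push origin master
```

After the push, the bare repository in the Dropbox folder holds the full history, and Dropbox syncing takes care of moving it to collaborators.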
Collaborators and Others Wishing to Connect to Your Repository
The point of using Dropbox is to have a synced space for your remote repository without hosting costs and some of the other frictions of sharing elsewhere. Here are the steps to get your collaborators onboard:
1. Share your Dropbox folder with them. They will need Dropbox accounts, of course.
2. Once the folder is shared and propagated, they will need to make an empty repository in the folder from which they plan to work. The command, reiterated from above, is:
git init .
3. The next step is to point at the remote repository. The “remote” repository is really the synced Dropbox folder; the idea is that Dropbox will do the heavy lifting in terms of keeping the folders synced.
git remote add origin /c/users/celebrimbor/dropbox/gitbox
In the above:
- In the local path, celebrimbor is the username on Windows.
4. The next step is to do a pull from the remote repository:
git pull origin master
5. At this point they should be able to inspect the project files in their local folder and work on them using git to manage their versions.
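The collaborator side can likewise be sketched as one script. Again a hedged sketch: on a real setup the shared folder already exists and contains pushed commits, so the temp-dir stand-in and the empty bare repository below are illustration-only assumptions:

```shell
# Placeholder: the shared Dropbox folder holding the bare repository
# (e.g. /c/users/celebrimbor/dropbox/gitbox); a temp dir stands in here.
SHARED_REPO="$(mktemp -d)/gitbox"
git init --bare "$SHARED_REPO"      # stands in for the already-shared repo

# Collaborator: create a working folder and point it at the synced share.
WORK="$(mktemp -d)/work"
mkdir -p "$WORK"
cd "$WORK"
git init .
git remote add origin "$SHARED_REPO"

# With real content in the share you would run: git pull origin master
# Here the stand-in share is empty, so just verify the remote is reachable.
git fetch origin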
The Future Is Distributed
I opened the article by admitting I am new to Git. In terms of serious projects, I would only consider the last month or so as legitimate experience. I’ve also struggled, as a person relatively experienced with Subversion, with the gestalt of Git.
However, what the above shows is how useful it is to create repositories that live in local or remote file systems, and how much flexibility there is in taking advantage of existing protocols and structures that do the sharing on our behalf. Not only is that remote repository easily accessible, it’s logically separate from my local repository, to which I can make changes and commit to my heart’s content. There are probably other niceties, but the last positive I noticed was that Git was tracking my changes with my default user/email configuration. There is a way to override this per repository, and it makes it easy to differentiate my changes from any collaborator’s. Add on the ability to do in-place branching, and this is quite a powerful environment.
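For what it’s worth, the per-repository identity override mentioned above can be done with git config. A small sketch, using a throwaway repository and placeholder name/email:

```shell
# A throwaway repository just to demonstrate per-repository identity;
# the name and email below are placeholders.
DEMO="$(mktemp -d)/demo"
git init "$DEMO"
cd "$DEMO"

# Set the identity git records on commits in this repository only;
# the global configuration is left untouched.
git config user.name  "Ada Example"
git config user.email "ada@example.com"

# Read it back: commits made here are now attributed to the local identity.
git config user.name
```

Running `git config user.name` with no value reads the setting back, with the repository-local value taking precedence over the global one.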
One final note: the above is probably not a great idea for a full-fledged project (or is it? maybe someone with experience can comment… ). The project I work on takes advantage of GitHub, and especially after using their tools to hunt down (blame button!) some issues, I fully appreciate their business model. One other plug is for my personal project hosting provider, ProjectLocker. They host Git repositories, and after 2+ years with them as a primarily Subversion customer, I’m happy. I’ve started my first Git repository there and the process is straightforward.
... it wasn't magic, it was hard work, thoughtful design, and constant iteration.
Glenn Reid, who worked on iMovie and iPhoto, recollects.
One of the difficulties I have in writing blog posts is feeling as though I lack the ability to say something conclusive, some wit or wisdom that can wrap up my thoughts in a clever bow of prose. Perhaps it was all those years of papers that needed to be written with an intro, body, and conclusion.
But on the question of Next Big Language (in a personal sense) I admit I’ve been tossed to and fro over the years in my attempts to improve as a programmer. Many people I admire recommend trying to learn a new programming language each year. I’ve pursued this model or something like it, using the aspirational punch of New Year’s day to resolve to take up something new. As the year develops I move from basic syntax to contrived problems to some form of personal project. By the end of the year I’m starting to lurk within the community, reading the more heavily trafficked blogs and checking on events and conferences (never going of course). Given my propensity to look for heroes, I’ll seek out the more vocal thought leaders, all of whom have used said language and availed themselves of the community… for years.
The irony is that in my attempt to improve as a programmer I reach the fledgling stages of using a new language and then abandon it; the year ends and I’m fatigued by overcoming the really hard transition from understanding syntax to understanding idioms and techniques that make the language really different.
My team is switching to Git, so last week I dug up the infamous Linus Torvalds talk at Google on the story and thinking behind it as a source code management system. In between calling users of CVS (including you, users of Subversion) who disagree with him “stupid and ugly,” there was a little gem: he thought he could write his own system, better than CVS, in a couple of weeks. It kind of blew my mind, but after thinking about it a little it seems (whether it took two weeks or longer) that Torvalds succeeded because he had defined a good problem and conceived a solution for it. It wasn’t that he used a special language or tool; it was that he had a good problem set.
This year I am going to try something different. Instead of seeking out the new in my toolset, I’m going to turn back to what I know and use on a daily basis. After more than a decade, it’s easy to fall into old habits and use time pressure as an excuse for ignoring newer, better techniques that exist today. Rather than gaining my novelty from a shiny new language, I’m going to try to shift my focus to problems: finding tough ones, coming up with good solutions, and being persistent about getting them solved. Some problems are solved quickly (and oh how so many of us like to brag about how quickly we got something done), but a lot of them take a long time, on the order of years of accretion. And out of those years of accretion and persistence real expertise is born, the kind with which you can establish a meaningful presence, with something to offer, in the programming community to which you belong. Or so I would hope: this is my present thinking.
I wish I had some conclusion, some form of wisdom on learning new languages and becoming a better programmer. Instead I am limited to anecdote, experience, and personal experiments. This is the 2013 version, where my resolutions as a programmer are directed at problems before solutions.
Homage to Steve Yegge, who wrote a lot about the Next Big Language.
Some might discount the fact that Linus was really writing his own version of a distributed version control system modeled on BitKeeper, and hence that it wasn’t really that big of a deal. This brings to mind a quote attributed to Picasso:
“Good artists copy, great artists steal.”
Porting an idea is a good problem to work on, and even if Git was inspired by BitKeeper, it does have meaningful differences, as this old email from Linus indicates. I can only imagine how different they are now, seven years later.
Although this should be self-evident, consider the C# language itself: very useful from the get-go, but revisiting code written before generics and LINQ is quite painful. It bears mentioning that there is a balance between a solution getting better and a solution getting bloated from unnecessary features, aka “creeping featuritis.”
This is a personal account of The Tablet Wars. Mine is not the voice of a powerful insider; I don’t represent any company and have no major stake from which my opinion would yield benefit. If you want professional reviews, I recommend The Verge. Instead, imagine me as a common soldier of the American Civil War… a certain David Snow, baseborn son of a wandering freed slave and a Sioux woman, who took up arms in a Minnesota regiment based out of Fort Snelling with the hope that my personal journal of the war would be meaningful to me at some point in the future, when all the emotions of war were distant even if the fog never lifted.
Ruminations of War in Dakota Territory
My interest in tablets began with the first announcement of the iPad. I recognized that these devices presented a new form factor, a new way of experiencing the web that had not existed to date. My thinking was influenced by reading people like Donald Norman, from whom I grudgingly came to understand that the clunky multi-purpose “computer” would be replaced by devices that had computing power but were much more specific in their goals. Early tablets were fascinating, but too new and expensive to have any appeal.
Arrival at Fort Snelling, First Encounters
While it may not count as a tablet, my first handheld was a Kindle DX. You would be hard pressed to find someone who loved their Kindle the way I loved mine; it was a perfect fit for my primary use case of reading. I chose the larger Kindle DX because it would let me purchase digital copies of technical books and fit each page to the screen. After using it, I quickly realized that the Kindle’s MOBI format gave the best reading experience, regardless of screen size. This turned out alright, since the two publishers I tended to buy from, O’Reilly and Manning, release their eBooks in multiple formats, including MOBI (the Kindle format). Another key benefit of the Kindle was the ability to read newspapers and magazines while skipping all the clutter of owning physical copies. It’s incredible how fast those can pile up, especially since I spend a lot of time reading what’s in print. Combined with my packrat sensibilities, they made for a permanent mess in my home office.
I had a chance to get involved firsthand with The Tablet Wars last year, when I got a unique opportunity to develop a web-based application targeting mobile devices. The company provided me with both an iPad 2 and a Galaxy Tab to use as part of the effort. The project was fun and challenging, but my use of the tablets was limited to testing our application and getting stock quotes. The iPad 2, with its combination of a better browsing experience and a more mature app ecosystem, fully eclipsed the Galaxy Tab in my usage.
It became clear to me that while the iPad 2 always had the “whiz! bang! swipe! color!” showiness that could impress onlookers (for about 30 seconds), I preferred my Kindle DX for reading, especially anything long form – my skeuomorph bit is definitely off. The information flow on the Kindle worked better for me as well: content from blogs and other subscriptions was pushed to it periodically, so I spent most of my time just focused on reading. The browsing- and app-centric model of the iPad 2 gave me a tendency to hunt and peck, clicking through links or grazing from one app before switching to another. A disciplined user may not have this problem, though I am skeptical, because the device lends itself to that mentality.
Shipping Out with the 68th Regiment of USCT
The introduction of the Google Nexus and the rumors at the time of a Microsoft tablet pulled me completely into the war, since I realized that I needed a little more interaction (e.g. email, calendar) than my Kindle could give me in a portable tablet. It made a lot of sense to start saving so I would be ready to decide around the time Microsoft released something. As a programmer of things primarily Microsoft, it seemed that theirs was the ecosystem that would fit me best. But I was not ready to commit, because there were many other wildcards: I love the Amazon ecosystem and wanted to keep what I had purchased over the years of owning my Kindle DX, as well as continue to buy from their vast selection of books. This made me interested in the Kindle Fire HD. Add to this all of the positive reviews of the Google Nexus products, and I was in a state of indecision. The one thing that became steadily clear was that I wasn’t interested in the Apple ecosystem, for one simple reason: I hate iTunes with the heat of a thousand suns.
Battles, Victory at Appomattox
After several months of saving and reading reviews, the decisive moment was the introduction of the Microsoft Surface. I wish it had been a victory for the Surface, but the decisive moment was instead the realization that the device was not for me. This did not come down to technical specifications or the lack of apps, as many have been eager to point out. I, and many others like me, were simply priced out of the Microsoft ecosystem. The introductory price along with a keyboard would have run more than $600, as steep as, if not steeper than, a brand-new iPad 3. I also recognized that the introductory RT devices were running a different version of Windows altogether, which would generate compatibility problems. Finally, the limitation of installing software only through the Microsoft store triggered the same distrust I feel toward iTunes.
I haven’t given up on Windows 8, or on a future in which I run a Microsoft operating system on a tablet device. But at present, the cost of these devices puts them in the category of “laptop replacement” rather than tablet. I’ve been ogling the Lenovo Yoga and anticipate owning one sometime before the end of next year. When I’ve got the money saved, I’ll probably shop for something that can fake the tablet experience but also has enough muscle to help with my day-to-day computing.
In the end, after realizing I wouldn’t pick up a Surface, I decided to purchase a Nexus 7. The decision between it and the Kindle Fire HD wasn’t easy, but the Nexus had a slight advantage for a tinkerer like me, whereas the Kindle Fire HD is a great device for consuming content, running apps, and living in the Amazon ecosystem. The $199 price point was also right for a device I intended to use as a complement to, rather than a replacement for, my digital life on the computer.
Reclusion to Northern Minnesota, Final Thoughts
One thing I’ve learned over the years as a programmer (and as a human?) is that I’m a walking edge case. My thought process and decisions make a lot of sense for me because of my personal use cases, but because I think and go about things differently than most people, I can never say that what works for me applies to others. A teenage boy who is really into Michael Bay films and video games, for example, might be better off with a PS Vita than heeding the advice of a man with a toddler and gray hair like me. With that said, here is a summary of what I do on my tablet and why the Nexus 7 was a great fit for the money I spent:
1. Email, Calendar, Reader, other Google Stuff
The Google ecosystem is top notch and their apps on the Nexus 7 are first class.
2. Web Browsing
Chrome on Nexus is excellent.
3. Financial Information
I use the Bloomberg and CNBC apps. The CNBC Real-Time app on the iPad is much more robust, so I’ve shifted more to Bloomberg on the Nexus. Truth be told, Twitter is the best financial app: search for $SYMBOL, e.g. $DDD.
4. Music and Radio
TuneIn Radio Pro, Pandora, Amazon Music, and Google Play all do the trick. TuneIn Radio is a great way to catch my old home station, KCRW, and to explore international radio stations.
Works great, especially since I’m many leagues from my immediate family.
Pretty decent, on par with the iPad native app.
7. Twitter
I have come to love Falcon Pro more than all my other Twitter clients on any platform (TweetDeck, MetroTwit, etc.).
8. Evernote
Great for consuming documents. The app allows audio recording, so I usually speak a quick note to myself. I’ve also started using the Evernote Web Clipper in lieu of Instapaper for some web pages, especially where content format is important (e.g. sample code).
9. Video
Not quite as good as watching on my PC, but still a great option, especially when I need to be away from the computer (e.g. watching a toddler).
10. Amazon Kindle
Where I read virtually anything from my library.
 There is still a mess. Technology doesn’t solve genetic/familial predispositions.
 I guess you could say I’m more of a minimalist or utilitarian. Animated page turns, fake leather calendars, “book shelves” all represent noise. Think Rococo versus Bauhaus.
If you are saving toward a specific goal, try SmartyPig as a way of helping you put the money away.
 It’s not just the well known authors and best sellers. Some of the most interesting reads I’ve gotten from Amazon are the cheaper, self-published eBooks. Books like ANESthetized, which I would never find in a store.
There are many reasons for this, but let’s start with how bloated the software has become. I do love my iPod, though, so I’ve been using MediaMonkey. Price was another factor, though the iPad Mini didn’t have the technical specs to compete with the Nexus.
It would be nice to see a variant that let me have at least 8GB of RAM, though I think 16GB will be a requirement for my next machine.
 I should disclose that my wife has an iPad 2 that is as perfect a fit for her as the Nexus is for me.