In the wake of the death of Google Reader, I was planning to write an ode to RSS and its importance for the web, but Dieter Bohn has done the job in his article on The Verge, “Why RSS still matters.”
In short, I’m hopeful because something that valuable and so passionately used won’t just disappear. This represents an opportunity for all of us who consider ourselves innovators to build something that fills the gap left by Google Reader. At present I’m kicking the tires of NewsBlur, but here are some things I’d love to see in the more evolved RSS clients people will start building:
1. Tools for long form reading – similar to Instapaper which I use for deferred but eventual reading.
2. Shared content from feeds, perhaps also syndicated with annotations. Take the social network out of the walled garden and make it open.
3. Tools for curating and annotating syndicated content.
4. Tools for feeds with specialized content like video blogs or podcasts.
5. Integrated Groupware: things like long form chat and the ability to post comments in an environment that makes them visible to some group of people. The big trick is to avoid the walled-in social network; we have too many of those.
Any other ideas?
There’s An App For That
Thousands of apps, native and web-based, litter the consumer-facing stores of big platform vendors, each with some story about how they will make your life better. Even if your problem domain is something as obscure as surviving an earthquake, someone has written a simple app with the singular purpose of keeping you alive.
What’s interesting to me, in a world saturated with apps, is how we are conditioned now to look for boxes to fill, buttons to push, and single domain user interfaces in order to get things done. Whether it’s on the web or a mobile device, this is the paradigm we seem to find everywhere.
Lego and Clay
Many moons ago, when Silverlight was still fresh and bright in the eyes of the Microsoft developer crowd, I attended a talk given by Rick Barraza on animation. He was attempting to walk those of us newer to Silverlight through his thought process and how he developed some of the showcase pieces of software that his then employer, Cynergy, had developed for Microsoft. As he showed attendees his process of manually animating an effect of falling snow, he discussed a paradigm analogous to working with clay and contrasted it with how many of us in the Microsoft sphere have become accustomed to using controls and user interface building blocks to put things together, a more Lego-oriented perspective on software development. From an interview:
There are two types of personalities in this space. Those that like solving problems using the Lego method: snapping predictable and scalable objects together. And those who like solving problems with clay: loose and messy, but with a high level of customization and intricacy in the finished product. The tension between those two camps and their evolution, that cross pollination of ideas and techniques, should create some compelling experiences.
I will admit I was one of those developers who was uneasy and frustrated by the lack of snap-in controls. Even though things got better as Silverlight evolved, the idea of an evolving framework of prebuilt building blocks was something I had a hard time letting go of. Even if the outcome was battleship gray, I could build things quickly and without the kind of detail and ownership I would encounter by making everything from scratch.
Apps Are Legos
Apps fit the Lego mindset – they build a layer of abstraction on top of the task at hand, hoping to “automate” some of the structure around it. The goal of app designers (besides the money) is that this abstraction makes for a “solved problem” – that people would see any effort on their own part as reinventing the wheel.
For example, if you have a newborn, one of the things they have you do in the first few days is track when the baby eats and when it poops. Since this problem predates the dawn of the mobile era, people have usually kept a notebook with one column for the activity, one for the time, and a “notes” column for anything extra. These days, however, there are apps to simplify that, with push buttons that indicate the activity and automatically record the details. Why bother with the notepad, hunting for a pen, or inconsistent notations the nurse has a hard time reading? Just push a button!
If you are happy living in an abstraction this makes perfect sense. Most app design doesn’t anticipate too much around a problem and focuses on a specific task. Some of the more subtle and clever designs steer users in a particular direction or eliminate things the app designer finds unnecessary. It all works great as long as there is no edge case or unexpected element within the problem domain. Continuing with the idea of newborns, our imaginary app that tracked eating, sleeping, and pooping would work well until the doctor noticed something and asked you to record something special along with the regular activity (e.g. every time little Jonny’s poop is green, write down how long it has been since he ate!).
For those who dislike snap-in pieces or want something completely unique, the blank piece of paper offers utmost freedom to solve problems ranging from taking care of a newborn to planning a writing schedule. In a digitized environment clay comes primarily in the form of text editing or the slightly more evolved spreadsheets.
I had a friend who worked for a defense contractor a while back who, although he couldn’t tell me much about his day job, alluded to being an engineer on a team designing missile defense systems. When I asked what kind of tools they used for something like that, expecting something very “advanced” and esoteric, he just replied in earnest: a lot of Excel. At the time I thought he was being discreet, but these days I don’t have a hard time believing it, especially if he had to do a lot of mathematical modeling. I’ve also started to pay attention to the world of finance, and the spreadsheet is the lingua franca of that world, whether you’re doing an asset allocation model or some form of quantitative analysis.
It takes no stretch of the imagination to visualize a spreadsheet that tracks events for a newborn. But because a spreadsheet is like clay, designing it would involve some messiness – and automating things would require a bit more effort, the type that would send most people back to the Lego jar.
The sculpting of this spreadsheet would also become a reflection of the sculptor. It could be a cringe-worthy effort of repetition, manual entry, and strange conventions, or it could be something singular: a small, elegant, and beautiful reflection of how they chose to solve the problem.
As with most things it is tempting to be lured into a false dichotomy of Lego versus clay. It seems as though so many other facets of life are polluted with characterizations of polar opposites where one has to pick a side. Artists and designers argue about form versus function. Our politics are polluted with arguments framed as liberal versus conservative. Personality types are misconstrued as either extroverted or introverted. Even my petty world of comic book collecting has the perennial arguments of Marvel versus DC (Marvel of course).
Rather than picking a side or calling one bad, I think it’s more constructive to do a self-diagnosis of one’s predispositions and then try to find a point at which the two approaches can inform each other. I’ve seen amazing work from those who operate with Lego, from Alice Finch’s real-world recreation of Hogwarts Castle to the kind of apps that make the heart sing. At the same time, it’s hard to deny the beauty and power of sculpture; who wouldn’t stop for Cordier’s Bust of an African Woman?
My observation is that for those who are oriented toward clay, the danger lies in being overwrought or too singular: something that is molded so endlessly that it loses its sense of purpose, or something that is so unique to a circumstance that it is never again useful. For those oriented toward Lego, the danger seems to come in living too much inside of an abstraction and becoming unable to see around it.
It Never Occurred To Me
In general, it seems as though our culture, through a mix of commercial forces and laziness, is driven toward the Lego mindset: to always look for the “right” prepackaged software that meets our needs. I know I’m a Lego person, especially when it comes to common, simple problems. But recently I’ve been thinking about how little I actually know about Excel, and how many app-oriented problems would be trivial with a spreadsheet. For example, I used a 37Signals app for managing a “To Do” list for my reading. How difficult would this be in Excel? Why did it never occur to me to simply make a spreadsheet?
When it comes to balance, what matters is that decisions are made with a cognizance of the alternatives – choosing among the different paths that can be taken. I have personally been too App focused, too dependent on prepackaged solutions, whether it’s To Do list software or project management. What would happen if I started trying to solve these problems with clay? What would happen if, rather than color by number, I began to paint?
Apps are useful. So too is the mindset behind them; I’ve learned that most things that become mainstream do so not because people are sheep but because they are designed well enough to accommodate needs. It is also wasteful to focus on our own implementations of solved problems when this serves as a distraction from the larger problem set we are trying to solve; back to babies, I know that when my children were born my focus was on spending time with them, helping my wife, getting sleep, and maintaining my sanity. Creating a perfect notebook or spreadsheet to track the baby’s activity was barely a passing thought, and for the right reasons.
But there are things that warrant the attention. Things that I do every day, things that are core to my job and profession, ought to become candidates for a conscious choice between an out-of-the-box solution and something customized and flexible. The pre-made software may do the trick, but at this point it seems to me that the Lego-oriented sphere of my thinking should inform the part that tends toward clay, the part of my process where I think about how I would solve the problem myself if it became necessary.
Joel Spolsky wrote a long time ago that most of the time it’s not a good idea to reinvent the wheel, but that it’s something you should do when you are operating on the core of your business, platform, or goal. This is debatable, and I’m sure any reader will have an opinion (I’ve seen attempts at this become poor, feature-starved shadows of what is open and freely available). But on the question of choosing full-featured software or an App that solves our problems by putting us into someone else’s abstraction, versus the sweat equity of building something ourselves and really thinking through our needs, I think it’s worth the kind of consideration that makes it a deliberate choice, with an awareness of the benefits and limitations we impose upon ourselves. Despite the generality that none of us is a unique and beautiful snowflake, we all do come with our own edge cases.
First and foremost, the full disclosure: I’m new to Git. Although I’d used Github for a few of my errant thoughts/projects, it was for the most part just copying commands without a deep understanding of the benefits of distributed source control.
Today a light clicked on when I was finally able to set up a Dropbox repository and share it with some collaborators. First, here are the steps I followed, adapted from an old post by Roger Stringer.
Setting Up Your Remote Repository On Dropbox
1. Get Dropbox, create your folder. (If you don’t have a Dropbox account, please use this affiliate link which will give me more space).
2. Create a “bare” repository there:
- Start Git Bash in the folder and use the following commands:
git init --bare
- Once you’ve created your empty repository, you’ll see the repository metadata files; this is expected.
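Concretely, steps 1 and 2 look something like this in Git Bash; the path here is a stand-in, so substitute your own Dropbox location (e.g. /c/users/yourname/dropbox):

```shell
# ./dropbox/gitbox is a stand-in path for your real Dropbox folder.
mkdir -p dropbox/gitbox
cd dropbox/gitbox
git init --bare
ls    # HEAD, config, hooks, objects, refs... the repository metadata
```

A bare repository has no working files of its own, which is exactly what you want for a folder whose only job is to be synced.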
Setting Up Your Local Repository, Pushing To Dropbox
Now that you have your Dropbox folder set up as a remote repository, you can set up your working folder. I already have a Visual Studio project with some files in it, so I just started a Git Bash in that folder.
1. Double check that you’ve got a .gitignore file that is configured for Visual Studio or whatever project files you’d like to ignore.
2. Create the local git repository with the following:
git init .
3. Now add all your files to be tracked by the local repository with the following:
git add --all
4. Commit the changes you made by adding all those files:
git commit -m "Added initial files for git tracking"
5. Now set up your remote repository as the previously created “bare” repository in Dropbox. Here is the small divergence from the original article, since we’re on a Windows file system:
git remote add origin /c/users/curufin/dropbox/gitbox
In the above:
- origin represents the name of the remote repository
- In the local path, curufin is my username on Windows and gitbox is the name of the folder I made.
6. Push your files to the remote dropbox repository:
git push origin master
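Putting steps 1 through 6 together, the whole local-side flow looks roughly like this. The paths, file names, and identity settings are placeholders; `./gitbox` stands in for the bare Dropbox repository and is created at the top just so the sketch is self-contained:

```shell
# Stand-in for the bare repository that would live in Dropbox.
git init --bare gitbox

# The working folder (a Visual Studio project, in my case).
mkdir project && cd project
git init .
git config user.name "Your Name"           # only needed on a fresh machine
git config user.email "you@example.com"
echo "placeholder" > readme.txt            # stand-in for real project files
git add --all
git commit -m "Added initial files for git tracking"
git branch -M master                       # in case your git defaults to another branch name
git remote add origin ../gitbox
git push origin master
```

After the push, the bare repository in the Dropbox folder holds the full history, and Dropbox syncs it to anyone sharing the folder.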
Collaborators and Others Wishing to Connect to Your Repository
The point of using Dropbox is to have a synced space for your remote repository, but without the hosting costs and some of the other frictions of sharing elsewhere. Here are the steps to get your collaborators on board:
1. Share your Dropbox folder with them. They will need Dropbox accounts, of course.
2. Once the folder is shared and propagated, they will need to make an empty repository in the folder from which they plan to work. The command, reiterated from above is:
git init .
3. The next step is to point to the remote repository. The “remote” repository is really the synced Dropbox folder, and the idea is that Dropbox will do the heavy lifting in terms of keeping the folders synced.
git remote add origin /c/users/celebrimbor/dropbox/gitbox
In the above:
- In the local path, celebrimbor is the username on Windows.
4. The next step is to do a pull from the remote repository.
git pull origin master
5. At this point they should be able to inspect the project files in their local folder and work on them using git to manage their versions.
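End to end, the collaborator’s side can be sketched like this; the first half just simulates the owner’s already-synced Dropbox folder (`./gitbox`), and every name and path is a placeholder:

```shell
# Simulate the shared Dropbox folder: a bare repo seeded with one commit.
git init --bare gitbox
mkdir owner && cd owner
git init .
git config user.name "Owner" && git config user.email "owner@example.com"
echo "v1" > notes.txt
git add --all && git commit -m "seed"
git branch -M master
git push ../gitbox master
cd ..

# The collaborator's steps, as described above:
mkdir collaborator && cd collaborator
git init .
git remote add origin ../gitbox
git pull origin master
ls    # the owner's notes.txt is now in the working folder
```

From here the collaborator commits locally and pushes back to origin; Dropbox takes care of moving the bytes between machines.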
The Future Is Distributed
I opened the article by admitting I am new to Git. In terms of serious projects, I would only consider the last month or so as legitimate experience. I’ve also struggled, as a person relatively experienced with Subversion, with the gestalt of Git.
However, what the above shows is how useful it is to create repositories that live in local or remote file systems, and how much flexibility there is in being able to take advantage of existing protocols and structures that do the sharing on our behalf. Not only is that remote repository easily accessible, it’s logically separate from my local repository, to which I can make changes and commit to my heart’s content. There are probably other niceties, but the last positive I noticed was that Git was tracking my changes with my default user/email configuration. This can be overridden per repository, and it makes it easy to differentiate my changes from any collaborator’s. If you add on the ability to do in-place branching, this is quite a powerful environment.
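As a small illustration, the per-repository identity override is just a pair of config commands run inside the repository; the name and email below are placeholders:

```shell
# A throwaway repository just to demonstrate repo-local settings.
git init demo && cd demo

# These override the global user.name/user.email for this repository only.
git config user.name "Collaborator Name"
git config user.email "collaborator@example.com"

git config user.name    # reads back the repo-local value
```

Without the repo-local values, commits fall back to whatever `git config --global` has set, which is how Git was already stamping my changes with my default identity.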
One final note: of course the above is probably not a great idea for a full-fledged project (or is it? maybe someone with experience can comment…). The project I work on takes advantage of Github, and especially after using their tools to hunt down (blame button!) some issues, I fully appreciate their business model. One other plug is for my personal project hosting provider, ProjectLocker. They host Git repositories, and after 2+ years with them as a primarily Subversion customer, I’m happy. I’ve started my first Git repository there and the process is straightforward.
... it wasn't magic, it was hard work, thoughtful design, and constant iteration.
Glenn Reid, who worked on iMovie and iPhoto, recollects.
One of the difficulties I have in writing blog posts is feeling as though I lack the ability to say something conclusive, some wit or wisdom that can wrap up my thoughts in a clever, prosaic bow. Perhaps it was all those years of papers that needed to be written with an intro, body, and conclusion.
But on the question of Next Big Language (in a personal sense) I admit I’ve been tossed to and fro over the years in my attempts to improve as a programmer. Many people I admire recommend trying to learn a new programming language each year. I’ve pursued this model or something like it, using the aspirational punch of New Year’s day to resolve to take up something new. As the year develops I move from basic syntax to contrived problems to some form of personal project. By the end of the year I’m starting to lurk within the community, reading the more heavily trafficked blogs and checking on events and conferences (never going of course). Given my propensity to look for heroes, I’ll seek out the more vocal thought leaders, all of whom have used said language and availed themselves of the community… for years.
The irony is that in my attempt to improve as a programmer I reach the fledgling stages of using a new language and then abandon it; the year ends and I’m fatigued by overcoming the really hard transition from understanding syntax to understanding idioms and techniques that make the language really different.
My team is switching to Git, so last week I dug up the infamous Linus Torvalds talk at Google on the story and thinking behind it as a source code management system. In between calling users of CVS who disagree with him “stupid and ugly” (and that includes you, users of Subversion), there was a little gem in there: that he thought he could write his own system, better than CVS, in a couple of weeks. It kind of blew my mind, but after thinking about it a little, it seems (whether it took two weeks or longer) that Torvalds succeeded because he had defined a good problem and conceived a solution for it. It wasn’t that he used a special language or tool; it was because he had a good problem set.
This year I am going to try something different. Instead of seeking out the new in my toolset, I’m going to turn back to what I know and use on a daily basis. After more than a decade, it’s easy to fall into old habits and use time pressure as an excuse for ignoring newer, better techniques that exist today. Rather than getting my novelty from a shiny new language, I’m going to try to shift my focus to problems: finding tough ones, coming up with good solutions, and being persistent about getting them solved. Some problems are solved quickly (and oh, how so many of us like to brag about how quickly we got something done), but a lot of them take years of accretion. And out of that accretion and persistence real expertise is born, the kind where you can have a meaningful presence, with something to offer, in the programming community to which you belong. Or so I would hope: this is my present thinking.
I wish I had some conclusion, some form of wisdom on learning new languages and becoming a better programmer. Instead I am limited to anecdote, experience, and personal experiments. This is the 2013 version, where my resolutions as a programmer are directed at problems before solutions.
 Homage to Steve Yegge who wrote a lot about the Next Big Language.
 Some might discount the fact that Linus was really writing his own version of a distributed version control system modeled on Bitkeeper – hence it wasn’t really that big of a deal. This brings to mind a Picasso quote:
“Good artists copy, great artists steal.”
Porting an idea is a good problem to work on and even if Git was inspired, it does have meaningful differences as this old email from Linus indicates. I can only imagine how different they are now, 7 years later.
Although this should be self evident, how about the language of C# itself? Very useful from the get-go, but trying to revisit code from the past without generics and LINQ is quite painful. It bears mentioning that there is a balance between a solution getting better and the point at which it begins to bloat with unnecessary features, aka “creeping featuritis.”
This is a personal account in The Tablet Wars. Mine is not the voice of the powerful insider; I don’t represent any company and have no major stakes from which my opinion will yield major benefit. If you want professional reviews I would recommend The Verge. Instead imagine me as a common soldier of the American civil war… a certain David Snow, the baseborn son of a wandering freed slave and Sioux woman who took up arms in a Minnesota regiment based out of Fort Snelling with the hopes that my personal journal of the war would be meaningful to me at some point in the future when all the emotions of war were distant even if the fog never lifted.
Ruminations of War in Dakota Territory
My interest in tablets began with the first announcement of the iPad. I recognized that these devices presented a new form factor, a new way of experiencing the web that to date had not existed. My thinking was influenced by reading people like Donald Norman, from whom I grudgingly understood that the clunky multi-purpose “computer” would be replaced by devices that had computing power but were much more specific in their goals. Early tablets were fascinating but too new and expensive to have any appeal.
Arrival at Fort Snelling, First Encounters
While it may not count as a tablet, my first handheld was a Kindle DX. You would be hard pressed to find someone who loved their Kindle the way I loved mine; it was a perfect fit for my primary use case of reading. I chose the larger Kindle DX because it would enable me to purchase digital copies of technical books and fit each page to the screen. After having it a while, I quickly realized that the Kindle’s MOBI format offered the best reading experience, regardless of the screen size. This turned out alright since the two publishers I tended to buy from, O’Reilly and Manning, release their eBooks in multiple formats including MOBI (the Kindle format). Another key benefit of the Kindle was the ability to read newspapers and magazines while skipping all the clutter of owning physical copies. It’s incredible how fast that can build up, especially since I spend a lot of time reading what’s in print. Combined with my packrat sensibilities, it made for a permanent mess in my home office.
I had a chance to get involved first hand with The Tablet Wars last year after I got a unique opportunity to develop a web based application targeting mobile devices. The company provided me with both an iPad 2 and Galaxy Tab to use as part of the effort. The project was fun and challenging but my use of the tablet devices was limited to testing our application and getting stock quotes. The iPad 2, with a combination of a better browsing experience and more mature app ecosystem, fully eclipsed the Galaxy Tab in my usage.
It became clear to me that while the iPad 2 always had the “whiz! bang! swipe! color!” showiness that could impress onlookers (for about 30 seconds), I preferred my Kindle DX for reading, especially anything long form – my skeuomorph bit is definitely off. The information flow on the Kindle worked better for me as well: content from blogs and other subscriptions was pushed to it on a periodic basis, so I spent most of my time just focused on reading. The browsing and app centric model of the iPad 2 gave me a tendency to hunt and peck a lot, clicking through links or grazing at one app before switching to something else. A disciplined user may not have this problem, though I am skeptical because the device lends itself to that mentality.
Shipping Out with the 68th Regiment of USCT
The introduction of the Google Nexus and the rumors at the time of a Microsoft tablet pulled me completely into the war, since I realized that I needed a little more interaction (e.g. email, calendar) than my Kindle could give me in a portable tablet. It made a lot of sense to start saving so I would be ready to decide around the time Microsoft released something. As a programmer of things primarily Microsoft, it seemed that theirs was the ecosystem that would make the best fit for me. But I was not ready to commit because there were many other wildcards: I love the Amazon ecosystem and wanted to keep what I had purchased over the years of having my Kindle DX, as well as continue to have the ability to buy from their vast selection of books. This made me interested in the Kindle Fire HD. Add to this all of the positive reviews of the Google Nexus products and I was in a state of indecision. The one thing that became steadily clear was that I wasn’t interested in the Apple ecosystem for one simple reason: I hate iTunes with the heat of a thousand suns.
Battles, Victory at Appomattox
After several months of saving and reading reviews, the decisive moment was the introduction of the Microsoft Surface. I wish it had been a victory for the Surface, but what made for the decisive moment was instead the realization that the device was not for me. This did not come down to technical specifications or the lack of apps, as many have been eager to point out. I, and many others like me, were simply priced out of the Microsoft ecosystem. The introductory price along with a keyboard would have run to more than $600, as steep as if not steeper than a brand new iPad 3. I also recognized that the introductory RT devices were running a different version of Windows altogether, and this would generate compatibility problems. Finally, the limitation of installing software only through the Microsoft store gave me the same distrustful feeling I have about iTunes.
I haven’t given up on Windows 8, or a future in which I run a Microsoft operating system on a tablet device. But at present, the cost of these devices makes them fit into the category of “laptop replacement” rather than tablet. I’ve been ogling the Lenovo Yoga and anticipate owning one sometime before the end of next year. When I’ve got the money saved I’ll probably shop for something that can fake the tablet experience but that also has enough muscle to help with my day to day computing.
In the end, after realizing I wouldn’t pick up a Surface, I decided to purchase a Nexus 7. The decision between that and the Kindle Fire HD wasn’t easy, but in the end the Nexus had a slight advantage for a tinkerer like me, whereas the Kindle Fire HD is a great device for consuming content, running apps, and living in the Amazon ecosystem. The price point, $199, was good as well for a device I intended to use as a complement rather than a replacement to my digital life with the computer.
Reclusion to Northern Minnesota, Final Thoughts
One thing I’ve learned over the years as a programmer (and as a human?) is that I’m a walking edge case. My thought process and decisions make a lot of sense for me because of my personal use cases, but because I think and go about things differently than most people, I can never say that what works for me applies to others. A teenage boy who is really into Michael Bay films and video games, for example, might be better off with a PS Vita than heeding the advice of a man with a toddler and gray hair like me. With that said, here is a summary of what I do on my tablet and why the Nexus 7 was a great fit for the money I spent:
1. Email, Calendar, Reader, other Google Stuff
The Google ecosystem is top notch and their apps on the Nexus 7 are first class.
2. Web Browsing
Chrome on Nexus is excellent.
3. Financial Information
I use Bloomberg and CNBC apps. The CNBC Realtime app on iPad is much more robust so I’ve shifted more to Bloomberg on the Nexus. Truth be told Twitter is the best financial app – search $SYMBOL e.g. $DDD
TuneIn Radio Pro, Pandora, Amazon Music, and Google Play all do the trick. TuneIn Radio is a great way to catch my old home station, KCRW, and to explore international radio stations.
Works great, especially since I’m many leagues from my immediate family.
Pretty decent, on par with the iPad native app.
I have come to love Falcon Pro more than all my other Twitter clients on any platform (Tweetdeck, Metrotwit, etc.).
Great for consuming documents. The app allows for recording audio, so I usually dictate a quick note to myself. I’ve also started to leverage the Evernote Web Clipper in lieu of Instapaper for some web pages, especially where content format is important (e.g. sample code).
Not quite as good as watching on my PC, but still a great option, especially when I need to be away from the computer (e.g. watching a toddler).
10. Amazon Kindle
Where I read virtually anything from my library.
 There is still a mess. Technology doesn’t solve genetic/familial predispositions.
 I guess you could say I’m more of a minimalist or utilitarian. Animated page turns, fake leather calendars, “book shelves” all represent noise. Think Rococo versus Bauhaus.
 If you are working on saving toward a specific goal, try out Smarty Pig as a way of helping you put the money away.
 It’s not just the well known authors and best sellers. Some of the most interesting reads I’ve gotten from Amazon are the cheaper, self-published eBooks. Books like ANESthetized, which I would never find in a store.
 There are many reasons for this but let’s start with how bloated the software has become. I do love my iPod though so I’ve been using Media Monkey. Price was another factor though the iPad Mini didn’t have the technical specs to compete with the Nexus.
 It would be nice to see a variant that let me have at least 8GB of RAM though I think 16GB will be a requirement for my next machine.
 I should disclose that my wife has an iPad 2 that is as perfect a fit for her as the Nexus is for me.
Perl turned 25 a few days ago. What makes that a remarkable achievement is that the language remains cutting edge, pervasive, and useful in the present day. While there are many detractors of the language and its philosophy whom I’ve encountered in my experiences as a programmer, I continue to like the language, and perhaps one day I will even find a way for someone to pay me to use Perl.
My interest in Perl started with a college friend, Chris Nandor, aka pudge. There was a small web development forum on our on-campus BBS where people could post questions and answers. I always admired the facility with which problems could be solved in Perl. I don’t remember all the details (I think I used Fermat’s Little Theorem), but after I was proud of myself for solving a Google job application puzzle on a billboard, he decimated the same problem in a few lines of Perl. Chris worked on Slashdot and did a lot of other cool things with Perl, things like voting for Nomar Garciaparra 14,000 times in balloting for the “all star” game[3,4]. Things like that made me think: what if I could figure out a way to use that language for my own devices?
I started with the O'Reilly Perl Books and enjoyed the wry humor with which they were written. It soon became apparent to me that unlike the faceless drone corporate developers writing code in Visual Basic with variable names like intCounter, the Perl community was an agglomeration of really smart, lateral thinkers with senses of humor to match the inventor of the language, Larry Wall.
Although I did work on a few things, both for myself and for others, I never quite graduated to a big project or the Perl community, unless you count mailing lists and listening to virtually every episode of the Perlcast. I never made it to YAPC or OSCON, although it’s still a goal of mine to get to one someday. But my efforts with Perl really did pay off; as part of learning the language I was forced to become proficient with Regular Expressions, and doing so led to nregex.com, my .NET Regular Expression testing tool.
I am biased, but I think .NET developers should care about Perl for two reasons. The first is that it’s impossible to learn Perl without getting better at Regular Expressions, which is a portable skill. The second is CPAN, the Comprehensive Perl Archive Network, an online repository of reusable modules. There are thousands of modules in CPAN that deal with the muck you run into as a developer when solving “real world” problems that are boring but not commercially interesting enough for a big company to care about.
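As a small illustration of that portability (my own example, not from any of the books): the same named-group pattern moves between Perl, .NET, and Python with only a one-character syntax tweak. A sketch in Python:

```python
import re

# Named capture groups for an ISO-8601 date. Perl and .NET write the
# group as (?<year>...); Python's re module uses (?P<year>...), but the
# rest of the pattern is identical across all three.
pattern = re.compile(r"(?P<year>\d{4})-(?P<month>\d{2})-(?P<day>\d{2})")

m = pattern.search("Perl 1.0 shipped on 1987-12-18.")
print(m.group("year"), m.group("month"), m.group("day"))  # -> 1987 12 18
```

Once the pattern syntax is in your fingers, tools like a regex tester are mostly about trying alternations and quantifiers quickly, whatever the host language.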
This is a long and belated birthday card, but what more can you expect from a fanboy? On my bucket list: one day, port an interesting module from CPAN to .NET.
1. TMTOWTDI (there’s more than one way to do it) vs. “there should be only one way to do things” (Zen of Python).
2. http://google-tale.blogspot.com/2008/07/google-billboard-puzzle.html – the trick was to just iterate through digits of e in blocks of 10, making HTTP requests.
3. Archived story at http://static.espn.go.com/mlb/s/2001/0624/1218244.html. This may not seem like a big deal since today’s Microsoft ecosystem has equivalents of LWP::Simple, but remember that the Perl community has had that for many years.
4. Okay, he did some things that were more substantive, like running use.perl.org, but this is one of the fun ones I remember since I worked with some avid baseball fans at the time.
5. Such as Twig ( http://search.cpan.org/~mirod/XML-Twig-3.42/Twig.pm ), though I’m open to suggestions.
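As an aside, the billboard puzzle mentioned above can also be brute-forced directly rather than by making HTTP requests, assuming the standard statement of the puzzle (“first 10-digit prime found in consecutive digits of e”). A minimal Python sketch, with helper names of my own invention:

```python
def e_digit_string(n):
    """First n digits of e ("2718281828...") from the series e = sum 1/k!,
    using integer arithmetic with 10 guard digits to absorb rounding."""
    scale = 10 ** (n + 10)
    total, term, k = 0, scale, 1
    while term:
        total += term       # each pass adds scale / (k-1)!
        term //= k
        k += 1
    return str(total)[:n]

def is_prime(n):
    """Miller-Rabin, deterministic for n < 3.4e14 with these bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def first_10_digit_prime_in_e(span=150):
    """Slide a 10-digit window over the digits of e until one is prime."""
    digits = e_digit_string(span)
    for i in range(span - 9):
        window = digits[i:i + 10]
        if window[0] != "0" and is_prime(int(window)):
            return window

print(first_10_digit_prime_in_e())  # -> 7427466391
```

Not a few lines of Perl, admittedly, but the window search itself is the two-line part; most of the bulk is computing e and testing primality without outside libraries.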
At the local developer meetup someone was jokingly talking about “developer years” as an analog to “dog years.” It is kind of fun to think about – I didn’t get started until I had already finished school but there are a lot of people who already have years under their belts by the time they finish high school.
I probably toyed with the idea longer than its usefulness warranted, but I came up with the following methodology for determining developer years: “How many data access frameworks from Microsoft have you lived through?” My start was in the waning years of DAO and RDO, just before they were replaced by ADO. I am not sure anyone has a full list of data access APIs from then until now, but my guess is that I am 8:
- DAO / RDO
- MDAC 1.x
- MDAC 2.x
- LINQ to SQL
- EF 4*
- EF 5*
I am not counting minor releases (there were nine for MDAC 2.x alone, for example). I also placed an asterisk next to technologies I have never used in a production environment.
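For fun, the tally could be sketched as a tiny helper (the list and function here are my own invention, not any official metric):

```python
# A tongue-in-cheek "developer years" counter: one year per Microsoft
# data access era you lived through. The era names are illustrative.
DATA_ACCESS_ERAS = [
    "DAO", "RDO", "MDAC 1.x", "MDAC 2.x",
    "LINQ to SQL", "EF 4", "EF 5",
]

def developer_years(lived_through):
    """Count how many of the listed eras a developer overlapped with."""
    return sum(1 for era in DATA_ACCESS_ERAS if era in lived_through)

print(developer_years({"MDAC 2.x", "LINQ to SQL", "EF 4"}))  # -> 3
```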
I wonder what other ways there are to measure your developer years. How many times must we solve the age-old problem of CRUD with a database? Or of generating HTML for web pages?
I grew up on fantastic fiction. Even before a kid named Scott regaled me with the tales of The Lord of the Rings on our trips down Elgeyo Marakwet, I remember things like a nicely illustrated version of Hans Christian Andersen’s stories and a predilection for tales read by The Superscope Story Teller.
It may have been a blind choice on my end but I remember stumbling upon a character named Drizzt Do’Urden and the world of Faerun in books by R.A. Salvatore in college. I loved Salvatore’s stories so much I read everything of his I could get my hands on. Even as a slow reader, it wouldn’t be long before I was slowing down for the last chapter, trying to make the adventure last as long as possible.
A few months ago I ran across a podcast interview with Salvatore and made a shocking discovery: all these years I had had the pronunciation of his signature character Drizzt wrong. Salvatore, with his true Bostonian lilt, sounds like he says “Dritts.” In my mind and with friends, I had always said “Driz-it.” It was a little jarring, since my relationship with those books has spanned the last 20 years.
I would hope that Salvatore would not mind much. In the creative continuum his part was done: to tell a story and build the foundation from which a person’s imagination could complete the picture. This continuum exists in everything we as humans try to understand. In the parts of our lives where we consume artistic expression, we experience it in the movie scene that cuts away, the action between panels in a comic book, or the dramatic pause in a song where your head keeps bobbing.
It’s this continuum of creative production and consumption that was kindled in those early years of fiction, whether I was imagining Hans Christian Andersen’s mermaids or experiencing dread as I tried to conjure a mental picture of Medusa. It carried me forward to Salvatore’s dark elf and the other tales of might, magic, and adventure so intertwined with my formative years. Imagination of this kind was a wonderful and intoxicating thing.
It is the loss of this imagination that I have begun to lament with the success of Peter Jackson’s Middle Earth films; that world which had assembled itself in my head is now much more concrete as a sensory experience. It’s become increasingly difficult to imagine characters without their portrayals and respective actors. Middle Earth has become, for the most part, an ode to the physical geography of New Zealand.
Perhaps the biggest departure in my viewings of The Lord of the Rings film trilogy is the experience of putting on the One Ring. In the films this is a sinister thing, and yet as described by Tolkien prior to Frodo’s journey it seemed a benign and even pleasurable experience of invisibility.
My point in all this is not to bash the films, which to date I have loved and watched repeatedly, and I want to go beyond the book snob’s snide comment that films are always inferior. Many elements of interpretation from the films have helped me understand the books more, and Peter Jackson, along with the cast and crew, has been masterful at bringing Middle Earth to life. What I do want to pinpoint is the loss of freedom the imagination suffers when something is done this well: how you lose the way the words could sound themselves out in your head any way that you liked, how it was you who remained in charge of the creative and artistic direction of a story as it transferred from the pages of a book into your head.
There is no doubt that I will see and enjoy all of the films that make up The Hobbit film trilogy. I’m the target market, I’m a fan. As my imagination is filled I will notice more and forget a little. It’s the tradeoff of seeing and enjoying the film version of a book that’s been a part of my life.
1. Geek’s Guide to the Galaxy is probably the best fantastic fiction podcast I’ve run into online.
2. I’m always fascinated by how our imagination completes our religious belief. As a Christian, I find it interesting to look at Renaissance artwork of Biblical events to see how outlandish or frozen in time this can be. For example, Bruegel’s Christ Carrying the Cross.
3. I wanted to focus on imagination as an individual experience, but when stories are circulated and popular enough, there is something of a collective imagination. One of Peter Jackson’s successes was tapping into the collective imagination that already existed for Tolkien’s stories and using popular Tolkien-inspired artwork for keyframes in the films. Alan Lee’s The Lord of the Rings Sketchbook is a great book on the creative process that drove the films.
4. New Zealand has woven itself so closely into Tolkien’s world that a large amount of its tourism now comes from people like me who want to experience it as Middle Earth. I’m not sure if that’s good or bad, since the country probably has merits of its own without being a fantasy kingdom, but if someone gave me a chance to go, I’d be there.
Mark Dow wrote a little market forecast some months ago, in August. His opening paragraph, on why it’s not as much fun to be bullish, is a keeper:
“It’s not as fun to be bullish. Bears are smart. Bulls are wide-eyed optimists. Bears have data. Bulls tell stories. Bears make money when everyone else is in pain. Bulls make money when everyone else already claims to be a genius. In short, many of us get more satisfaction being bearish because the psychic payoff is greater: we calibrate our own self esteem not by our victories in absolute terms, but in our victories relative to others.”
I have to admit, I’m always a bit bullish, though I’m more of a foolish optimist.