Wednesday, December 7, 2011

Entering the Black Arts of Automation

Well, this is fun.  With all of the testing I do now on Ubuntu ARM, it is time to look at automating as much as possible.  The "fun" part is that I haven't done much coding or scripting since 2001 (when Perl ruled the day and Python was barely in existence).  The other part of the problem (for me, anyway) is that most of the existing automation tools we currently use in x86/amd64 testing rely on tools that just don't work on existing ARM platforms (kvm/libvirt, kexec, and ipmi, to name a few).  So existing scripts that do things like, say, "reimage a platform" need to be written from scratch.
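
To give an idea of what "from scratch" means here, something along these lines is roughly the direction: drive the board's serial console with pexpect and kick off a network install, since there is no kvm or ipmi to lean on.  This is only a sketch; the device path, bootloader prompt, and boot command are made-up placeholders, not my actual setup.

    # Sketch: trigger a reimage over a serial console instead of kvm/ipmi.
    # Device path, prompts, and boot command below are hypothetical.
    import pexpect

    console = pexpect.spawn('picocom -b 115200 /dev/ttyUSB0', timeout=600)
    console.sendline('')                       # wake up the console
    console.expect('=>')                       # hypothetical bootloader prompt
    console.sendline('run netboot_install')    # hypothetical netboot target
    console.expect('Installation complete', timeout=3600)  # hypothetical banner
    console.close()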

And I am torn between using the antiquated tools I know (or remember using in the past) and learning a bunch of new ones.  The overwhelmingly "helpful" responses I get when asking for help in certain areas are usually "Use this tool, it is easy."  Of course, those people are already well versed in that tool.  Try telling someone who only knows how to edit in vi how to use LaTeX.  Yeah, not going to happen.  Don't get me wrong, I am all for learning new tricks.  But I can't justify spending the days it takes to learn one task that someone who already knows the tool can whip together in a few minutes.

Also, I learn from books and examples.  I've already spent a lot of money updating my book library (two Python books, XML, Expect, and a few others).  My 4x8-foot bookshelf is starting to sag under the weight of all the books I have accumulated over the years.  And, no, I don't like ebooks for this.  I just bought the Exploring Expect book for my Nook Color, and while the information is proving very helpful, it isn't as easy to bounce between sections as it is with a good print edition.

But I am making very good progress.  It used to take me 2-3 days to fully test each kernel SRU update cycle (4 platforms across 3 releases), most of it hands-on.  Now, I can (almost) start a full test sequence with the click of a mouse and check the results in a day or two.  There are still a few kinks to work out (like automating the preseed configuration and monitoring the reimaging progress), but we're getting there.  Other parts of this testing were outside of my control (tests that fail because of configuration differences between ARM and x86/amd64 kernels, for example), but they too are being resolved.  I have also hit an issue in the past where an SRU kernel had an update from the vendor that disabled video on my test systems (well, disabled HDMI in favor of the LCD port, which I don't have the hardware to test).  Had that kernel gone out to the general public after being tested only in an automated, headless environment....
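
The "monitor the reimaging progress" piece mostly boils down to waiting for the freshly installed system to answer on the network before kicking off the tests.  Something like this rough sketch would do it (the hostname and timings are made up for illustration):

    # Sketch: poll until the reimaged board is reachable on ssh, then start testing.
    import socket
    import time

    def wait_for_ssh(host, port=22, timeout=3600, interval=30):
        """Return True once something answers on host:port, False on timeout."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            try:
                socket.create_connection((host, port), timeout=5).close()
                return True
            except socket.error:
                time.sleep(interval)
        return False

    if wait_for_ssh('panda-01.example.com'):      # hypothetical test board
        print('Reimage finished; starting the SRU kernel tests')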

Once I have the SRU process fully automated, I can focus on automating other jobs.  They should be fairly straightforward, as the core work (reimaging) will be done, and I can just launch a job to install and run packages at will.
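
In other words, the follow-on jobs could be as simple as something like this (the host, package, and test command are placeholders, not the real jobs):

    # Sketch: install a package on a reimaged board over ssh and run a test.
    import subprocess

    def run_package_test(host, package, test_cmd):
        """Install a package on the target and run a test command there."""
        subprocess.check_call(['ssh', host, 'sudo apt-get install -y ' + package])
        return subprocess.call(['ssh', host, test_cmd])

    status = run_package_test('panda-01.example.com', 'stress',
                              'stress --cpu 2 --timeout 60')
    print('test exit status: %d' % status)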

The other big (bigger) problem is (drum roll) infrastructure.  When I started on this route, I had 4 systems.  I now have 15.  Some of this can be deployed in our QA lab, mainly the headless stuff.  I'm not sure I could justify the expense of equipping the lab to do remote desktop testing (KVM/IP for HDMI is expensive, and then there is audio, Bluetooth, etc.).  Server stuff, yes.  Add to that, the power relay I have (see an earlier post) works fantastically...on 4 systems.  It is expandable up to 256 relays, but my personal budget...well...

Some people also think I should focus on automating the desktop testing somewhat.  Well: 1) the interfaces enabling any type of desktop automation are currently broken in gtk3, and 2) the desktop changes too rapidly to automate (GNOME 2 -> netbook-launcher -> Unity/Unity2D in 4 cycles).  That means scrapping or rewriting a lot of tests every cycle.  Much better to get tests for the server/core stuff running now, and hit the desktop later.  There is a lot that can be tested at that level that the desktop will also benefit from.


Hmm, I wonder if this blog could be automated.
