gpollice

Archive for the ‘Computer science’ Category

Book Review: Lean from the Trenches

In Book Reviews, Software Engineering on January 29, 2013 at 1:53 pm

Agile, Lean, Kanban, SCRUM, … . Will it ever stop? No! Frankly, change is good. However, not every change is good for you. In the last decade we’ve seen an increase in the number of “new” approaches to software development, and each of these has fanatics who claim that their approach is THE BEST approach to building software. What’s a practitioner to do? All we really want to do is create great software, and it seems that when we get into a rhythm, someone comes along to tell us how to do it better—and we often find things getting worse. Then we’re told that we’re doing it wrong and we need to bring in an expert to tell us how to do it right.

Ever since I worked for companies that have built software applications and tools, I’ve been amazed at the willingness of people to decide to do something because it’s a best practice. Well, let me say that I think best practice is code for: “We have a way of working and we want you to work that way too. This way of working worked for us, therefore it will work for you.” This is usually delivered by a consultant or salesperson who wants you to buy their services or product. (Full disclosure, I am a consultant at times, and I try to help people work better.)

The problem is that all of these best practices are usually stated as fact, and the context in which they are good choices is often left out. I continually see teams that decide to adopt a process or tool and fail miserably because that method or tool is not right for them. This can lead to another problem: rejecting the new way entirely, even though parts of it would help you work better. Most books on software development process tell you the “right” way of adopting a new process or tool, but they do so in a dogmatic way: “do this and this will happen.” There is a scarcity of books that focus on the experience of adopting and customizing new methods. Lean from the Trenches by Henrik Kniberg (Pragmatic Bookshelf, 2011) is a great stride forward in correcting this.

If you have been hiding for the last decade, you might not know what Agile and Lean methodologies are. If you haven’t been hiding, you’ve heard of them and may have some idea what they mean. To get a clear description of them, start reading this book by going to chapter 17, Agile and Lean in a Nutshell. You’ll find everything you need to know about these two there. Then go back to chapter 1.

What makes this, in my opinion, a great book is that Kniberg describes what was done on a large project for the Swedish Police rather than talking about how to do things in a perfect world on a made-up project. As you read the book, you’ll feel like you are getting helpful advice from an experienced practitioner who knows how to fit the process to the project rather than shoe-horning a project into a process. For each part of the project, you will see how a technique is implemented and, more importantly, why that particular technique or adaptation was chosen.

Kniberg covers all aspects of the project in the first part of the book. He describes the daily cycle and the practices that support it. Since they wanted to use a flow model, they adapted Kanban to their project. He describes how they scaled the Kanban boards and how they used the techniques and tools to keep the focus on the main goal of delivering value to the customer. Planning, tracking, scheduling, managing requirements—they’re all in this book, but you get a specific instance of the overall process, not just a description. You see it in action and you get to listen in on reflections about what worked and why.

If you’re interested in making your team work better, I suggest that you get this book before going out and hiring a consultant. You’ll see that a) it’s not much more than common sense, b) it’s a lot of work, and c) you’ll need to continually adapt. If you understand that, the rest is about the details and they’re unique to your situation.


Book Review: Team Geek by Brian W. Fitzpatrick and Ben Collins-Sussman

In Book Reviews, Software Engineering on January 1, 2013 at 7:37 pm

Yesterday I posted a lukewarm review of Driving Technical Change. I had high hopes for it and it just didn’t resonate with me. Today is a different story and a great way to start off the new year.

The bottom line for this book is: if you’re a software engineer who wants to do great things, enjoy your work, and work on a super team, read this book! I’ve been going through a lot of books on the subject of technical leadership, soft skills for engineers, creating great development teams, and so on. I’ve found a few that I thought were quite good, and they either have become or will become part of the material for my software engineering and related courses. But this book trumps them all so far. If you only read one book on this topic in the next year, read this one.

The authors, Ben and Fitz, work at Google. They’ve held several positions with Google and other companies. They have been associated with the Subversion open source project for years. In short, they’re real engineers who know how to build and ship software. They also have great insight into what it takes to build a great team, how to keep it together and growing, and how individuals can become part of such teams.

Ben and Fitz have clearly experienced many of the same types of organizations and personalities that I’ve encountered over the course of my industrial career. When I read an early section, “The Contents of this Book are not Taught in School,” I was hooked. They say “At press time, we’re not aware of any curriculum that actually teaches you how to communicate and collaborate in a team or a company. Sure, most students are required to participate in a group project at some point in their academic career, but there’s a big difference between teaching someone how to successfully work with another person and throwing him into a situation of forced collaboration. Most students end up jaded by the experience.” Well, if you’ve been in one of my software engineering classes, you know this isn’t true. That’s exactly what I try to teach and pretty much the way I do it. These two and I share a similar mindset.

The book describes several types of people, and provides stories of how to get these different types working well with a team while maintaining a team’s culture. They have great stories that I was able to identify with. They don’t try to mold these stories into patterns. They just tell it like it is—and it works for me. I finished the book in a few hours of enjoyable reading.

Ben and Fitz have had the opportunity, it seems, to have been able to choose the types of teams they work with as well as the type of companies that they’ve been employed by. Not everyone has that luxury. What the authors consider dysfunctional companies and organizations are simply the norm for a lot of folks. Don’t let that put you off if you’re in one of those organizations. You’ll find good advice on how to navigate the bureaucracies and get things done.

If you are a software engineer—whether you’re starting out or have been at it for years—read this book if you want to be better at what you do and if you want to get more satisfaction from your job. If you don’t get anything out of it, you are either the perfect software engineer (which is about as real as the Easter Bunny) or you’re in the wrong profession and should probably look for something else to do. The time spent reading this book will yield many rewards.

Book Review: Driving Technical Change by Terrence Ryan

In Book Reviews, Software Engineering, Uncategorized on December 31, 2012 at 3:43 pm

I’ve been going through several books lately on technical leadership. I’m interested in this for my software engineering course as well as a course on software project management that I’m putting together with a colleague. I was hoping that this book would have some real gems that I could use for material for these courses.

When I started the book, I was pretty excited. The table of contents started out with the right topics, in my opinion—Defining the Problem, Solve the Right Problem. But that was about the extent of its usefulness for my purposes. After this, the author goes into identifying various archetypes that one might encounter in software projects. These are the Uninformed, Herd, Cynic, Burned, Time Crunched, Boss, and Irrational. In order to give his choices credibility he applies the “Pattern” buzzword to them. I think he definitely identifies many of the characteristics that software developers encounter, but they just didn’t form a unified picture for me. It might be the fact that he keeps using the adoption of version control as one of his main examples. I don’t think this is a major issue today, but I could be wrong.

Ryan then goes into techniques for helping sway the opinions of the types he defined. The descriptions are not overly detailed, nor do they need to be. Scanning through the techniques to get a basic understanding of what they are and how to use them is worth the reader’s time. These will add to your toolbox of soft skills that you can use to be a more effective member of the team. These are not leadership techniques, but ones that any engineer can use to help introduce change to a project team or organization. The ones I found most relevant to my career are Gain Expertise (I just think of this as continuous learning), Demonstrate Your Technique, and Create Something Compelling.

The final section is a set of strategies one can adopt to systematically get changes adopted. After reading the first two sections, this should be almost self-evident to a smart reader.

If I were grading this book, I’d give it a B-. It’s somewhere in the middle of the pack for me among books that would help software engineers become leaders and effective team members. I think that even though the book is relatively short (213 pages), it could have been presented just as effectively as a series of a few blog posts—maybe that’s how it started.

The bottom line, from my viewpoint, is: if you have some extra money for books on soft skills, it won’t be wasted on this book, but if your budget is more limited, look for a book with more bang for the buck.

Why is it so hard to find books for a Webware class?

In Computer science, Teaching, Web on May 10, 2012 at 9:06 pm

I just finished teaching Webware at WPI for the fourth time. I seem to get the opportunity to teach it every two years. The course is supposed to give the students a basic understanding of how to build applications that are delivered over the Internet, typically through Web browsers. The course description in the catalog is a little bit out of date, but it covers the main things students should expect to encounter in the course. Each time I’ve tried to teach this course I’ve struggled to find a single book that covers all of the material in the depth I expect my students to be able to handle. Each time, I’ve been unsuccessful.

The first problem that always seems to occur is the fact that books on Web application development are almost always out of date. This occurs more with this course than with any other course I know of. The landscape of Web development changes at such a rapid pace that by the time a book is ready—whether it’s a new book or a new edition of an existing book—it’s quite out-of-date. It seems like the only way to keep up-to-date with Web development is to scour the Web for recent developments. If you’re like me and only teach Web development every couple of years, you probably can’t spend the time required to do this regularly. So, like me, you end up cramming for the information every time you’re going to give the course.

The second problem, even if you can find a recent book, is that so many of the books survey the Web development topics rather than going into depth on any of the subjects. If you’re blessed with having really good students, as I am, this becomes a problem: the books are just not something that they’re going to get much out of. The last time I taught the course we didn’t use a book. That didn’t work. Students want a book that presents the material covered in class. They want something they can read and study, all in one place.

The second time I taught the course we used the current edition of Sebesta’s Programming the World Wide Web. Some parts of it were quite good, but not enough to make the book a good fit for my class. In their course evaluations, students reported the major negative of the course to be the random chapters on PHP and other topics they were never going to use for the class, along with the reasonably shallow treatment of those topics.

The fact is that you need material from several sources. The problem is that books are really expensive. This time I thought I’d try a custom textbook. There were three books that I found covering JavaScript (JavaScript by Example), Web site design (Basics of Web Design: HTML5 & CSS3), and general HTML and Web topics (Internet & World Wide Web How to Program). All of these were from the same publisher, Pearson, and they offer the possibility of customizing a text for a class.

I selected the appropriate chapters from each book, submitted my choices, and then found out that two of the books were not available for custom texts. That left me with just Deitel’s Internet & World Wide Web book, the weakest of the three. I have found that most of the Deitel books are rather shallow in their coverage of a topic and are at a level that is much too easy for most of the students I have. However, time was running short, so I thought I’d select just a subset of the book that addressed topics I knew I’d cover. Since the Deitel book is a paperback costing over $120 (with extra cost if you want their on-line chapters), it seemed too expensive for the value of the content.

It turns out that the sixteen chapters that I selected cost $91. At face value, this was a little bit better. However, the whole Deitel book was selling for less than $90 on Amazon. What really sealed the deal for my students buying the book from Amazon was that the custom book ended up getting printed from an old edition, not the current edition that covered HTML5 and CSS3.

I’m still hoping that it will be possible to create a custom text for this course. I may try to get it from publishers that target practitioners, like O’Reilly and The Pragmatic Programmers. The books from these publishers were ones that I actually used more than the broader Deitel book. I’ll provide a review of these books in my next post.

A Case for Virtualization

In Computer science, OpenSource, Software Engineering, Tools on December 7, 2011 at 9:56 am

Maybe I’m late to the party, but until recently I had not paid much attention to using virtual machines on a regular basis. I figured that since I had a couple of desktop Windows systems and a couple of Mac notebooks, I’d be able to use almost any software I wanted without any problems. Well, that’s probably true, but when you begin to use lots of different software tools, especially open source tools that have many dependencies, things begin to get really sticky. And, when you try to figure out how to put together the software needed for students in your courses…well, let’s just say that if I had any hair left on my head, it would be gone. I am convinced that dependencies are the hobgoblins of open source.

If you’ve read some of my more recent posts, you know I’ve been playing with Cucumber for use in my upcoming Testing for Developers course. I want the students to be exposed to lots of tools, and that means we need lots of different languages, especially some of the dynamic languages, on the system. While Microsoft makes developing software for Windows relatively easy, it’s not my favorite platform for doing anything outside of the Microsoft ecosystem. I understand that and I’m not complaining. In the words of Bill Belichick, “it is what it is.” So, I do almost all of my development on my MacBook. It’s fast and it’s pretty much Linux (the key words here are pretty much).

If you stick to just Java, C, or C++, it turns out that almost any platform works as well as any other. I mean, if you load MinGW onto a Windows system you can develop with the C family. Add Eclipse and you’ve got a pretty portable IDE. But when you start to look beyond these languages, things get really murky.

A Concrete Example

Let’s look at a real example of what I’m getting at, setting up Cucumber for the class. We’ll start by just considering my Mac setup. Cucumber is written in Ruby and works really well if you have simple scenarios that you use to exercise some simple Ruby code. By simple Ruby code I mean code that has no GUI, database, Internet, or other features. In other words, code that’s probably not very useful for a real application.

As I went through the examples in The Cucumber Book, I began to need additional components like Sinatra, Capybara, and many others that I have no experience with. Well, how do you get all of this to work? Do they all really work together seamlessly? Maybe, if you have the right version of Ruby and the right versions of all the other gems.

It turns out that I had four versions of Ruby on my system: ruby 1.8.7, 1.9.1, 1.9.2, and MacRuby. What’s worse is that I didn’t realize it. Some of them were installed in a standard location (whatever that means). If I reordered the sequence of directories in my PATH, I got a different version of Ruby. I needed some consistency. I was using Ruby 1.9.1 and things were going along fairly well when I needed to get the service_manager gem. I installed it using the standard “gem install” approach. Then, when I tried to use it, there was a problem with readline.

After several hours of searching the Web, tweeting to a few friends, and posting on stackoverflow and The Cucumber Book forum, I got an answer that let me get past this. But things had deteriorated to the point that I had no idea what was installed, or where. I had used bundler for some things, gem for others, downloaded some packages, built some from source, and gotten software onto my system in other ways. I also tried rvm, but frankly, by this time I wasn’t sure what I was doing and the documentation wasn’t very helpful. Do you need to be using bash? How do you get it working with other shells? I was sitting on a software stack that was about to come crashing down. I could feel it in my bones.

Rewind

I decided to remove everything Ruby on my system and start over. Okay, that was another few hours, making sure that I copied everything I removed just in case I really messed it up. I finally was ready to start over. Thanks to Matt Wynne, one of the authors of The Cucumber Book, I ended up using rvm and the Gemfile from the latest version of the installation appendix in the book, and I got the right version of Ruby installed along with the gems and packages I needed. Now, this wasn’t perfect, but it was manageable. Some of the documentation was a little bit off and I had to get on the rvm IRC channel to ask for help, but I did it.
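For readers who want to see what that end state looks like, here’s a minimal sketch of a Gemfile in the same spirit. The gem list and version constraints below are my own illustration, not the book’s actual appendix:

    # Gemfile -- illustrative only; gems and versions are assumptions, not the book's list
    source 'http://rubygems.org'

    gem 'cucumber', '~> 1.1'   # the tool itself
    gem 'sinatra',  '~> 1.3'   # tiny web framework used in some examples
    gem 'capybara', '~> 1.1'   # drives web pages from your step definitions

With rvm managing the Ruby version (rvm install 1.9.2, then rvm use 1.9.2), you run bundle install once and bundle exec cucumber thereafter, so every invocation sees the same, consistent set of gem versions.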

Enter the Virtual Machine

Well, this was great. I now had a consistent installation on my Mac. But most of my students don’t use Macs; some use Windows, some use Linux. Since they’re computer science students—advanced undergraduates and graduate students—I’m safe in assuming that they know their way around Linux. If I could get the software configured on a Linux virtual machine, I could just give the students the VM image. Then I could either add new software for other topics I expect to cover in the course or simply create a different VM image.

I’ve been using VirtualBox for some time now in order to try out different versions of Linux, running Windows on my Mac, and so on. It’s free, runs on any x86 architecture, and just seems to work pretty well. So, that was the plan. I created the VM, loaded the latest Ubuntu Linux, and made sure it had some basic software like Java and a few other components. Then I went through my checklist of how to get the right Ruby and Cucumber installed, and in less than half an hour I had it running. Take a snapshot so that if I try to extend the image and mess it up, I can roll back. Voilà, the students can work without losing sleep over getting their system set up. There are other things they’ll lose sleep over that are much more important.
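If you find yourself repeating this, the snapshot step can also be scripted through VirtualBox’s command-line front end. A quick sketch (the VM and snapshot names here are made up):

    # Take a snapshot of a VM named "testing-course" (name is hypothetical)
    VBoxManage snapshot "testing-course" take "cucumber-baseline"

    # Later, roll the VM back to that known-good state (with the VM shut down)
    VBoxManage snapshot "testing-course" restore "cucumber-baseline"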

The VM approach, with the ability to snapshot and branch off of different snapshots, makes life so much easier for me as a teacher, and it will save the students a tremendous amount of time. I’m sure some of the uber-geeks in the class will set up the software on their own systems because that’s in their DNA. However, those who really want to concentrate on the course topics can now do that without the hassle of being a system administrator. I will definitely use this approach more in future courses.

Cucumber is Cool and so is The Cucumber Book

In Book Reviews, Computer science, Software Engineering, testing on December 1, 2011 at 2:37 pm

So, I’ve been learning Cucumber—a tool for writing executable acceptance tests—in preparation for my upcoming course, Testing for Developers. The course will be heavy on the theory of testing, but from the viewpoint of the developer. How do you build testability into your code, what techniques can you use, and what tools are there that will help you do test automation? Most of my students will already know the basic xUnit tools, but we will be going much deeper into mock objects and other techniques for writing really good unit tests. We’ll hit Test-Driven Development (TDD) hard and we’ll look at how TDD and Behavior-Driven Development (BDD) work together, etc. We also want to work with embedded software systems and build test tools when necessary, or at least the glue to help tools work together.

This is an ambitious undertaking. I had heard about Cucumber when I looked at RSpec several months ago. RSpec was a bit too limited for what I wanted. Then I got a copy of The Cucumber Book by Matt Wynne and Aslak Hellesøy, published by The Pragmatic Bookshelf. After reading most of the book and a few weeks of playing around with Cucumber, I’m hooked. Cucumber takes the place of several tools I was thinking about introducing to my class, and allows me to go into depth in several areas using a single platform. If you’re thinking about learning Cucumber, or want to learn something about it, get this book.

The style of the book is great. It’s easy to read and describes how to use Cucumber in several bite-sized pieces. Some readers may even find that the pieces are nibble-sized and want bigger bites. I urge you to resist that temptation. Install the software and go through each exercise in the steps described, even though you may know what you’ll need to do in the next few steps. This will help you develop your intellectual muscle memory for using Cucumber. Take time to taste each bite, no matter how small it may be, because there is some new spice in each one that may be subtle enough to miss if you skip over the step.

The first part of the book, chapters 1-6, introduces you to Cucumber fundamentals. Actually, this part does a little more. It teaches you something about BDD as well. If you’re really not interested in learning about BDD, you can skim chapters five and six, but I would not recommend skipping them. By the end of the first section I was able to do some useful things with Cucumber and used it to demonstrate requirement specification to my current undergraduate software engineering class.
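To give a flavor of what those “useful things” look like, here is about the size of a first working Cucumber example. The feature and step definitions are my own illustration in the spirit of the book’s exercises, not taken from it:

    # features/withdraw.feature
    Feature: Withdraw cash
      Scenario: Successful withdrawal
        Given my account balance is 100
        When I withdraw 40
        Then my balance should be 60

    # features/step_definitions/withdraw_steps.rb
    Given /^my account balance is (\d+)$/ do |amount|
      @balance = amount.to_i
    end

    When /^I withdraw (\d+)$/ do |amount|
      @balance -= amount.to_i
    end

    Then /^my balance should be (\d+)$/ do |expected|
      raise "expected #{expected}, got #{@balance}" unless @balance == expected.to_i
    end

Running cucumber from the project root matches each plain-English step against the regular expressions and executes the attached blocks. That’s the whole trick: the requirement text is the test.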

The second part of the book, chapters 7-10, contains the chapters you need to read and work through if you want to become competent with Cucumber. Again, some of the discussions seemed too brief to me, but after reflecting upon what I learned, I think the level is just right. Part of what you learn in this section is how to work with other tools to use databases and Web pages in your testing.

The third part of the book is one that does not have to be read in any special order, or even at all. But, if you really want to understand how to use Cucumber to define and (acceptance) test Web applications (written in Rails or otherwise) and other types of applications, you need to have these chapters available. So, skim them at least and then refer to them as necessary.

Is there a downside to this book? Well, not to the book itself. The fact is that Cucumber, like many other open source projects, works in a specific, interdependent ecosystem. Setting up Cucumber, its dependencies, and the other pieces you might want, like Sinatra, Capybara, etc., can be frustrating, especially if you want to do this on multiple platforms. If you’re not a Ruby expert, you may run into problems and will need to do some searching for answers or post to the forum for the book at the Pragmatic Bookshelf. But don’t let this scare you away. Cucumber is a good tool to learn. Whether you use it regularly or only for special occasions or specific projects, you will think a little bit differently about how you build software.

Agile and the second chasm: history does repeat itself

In Computer science, Software Engineering on November 8, 2011 at 5:05 pm

Kent Beck posted an interesting essay on the Agile Focus blog back in February. I think it’s a really insightful and important statement from one of the Agile elders (sorry, Kent, but accept the fact that we’re getting older). Although it’s eight months later, I just read it thanks to a re-tweet of the link by one of the people I follow. A downside of the Web and the age of instant communication is that there is an exponential explosion of content, and much—maybe most—of it is simply noise.

I’m glad that I came across this post though. It made me think hard about what he’s saying. I have found that over the years, I agree with Kent much more than I disagree with him. I remember being on a panel at a conference back around 2002 on XP, Agility, and process in general, and it seemed that Kent and I were on the same side, even though I was there officially representing Rational and the Rational Unified Process team.

The other panelists, all Agile consultants, were promoting dogmatic adherence to all of the XP practices. They indicated that they forced all team members to do all of the practices all of the time when they were engaged to help teams use XP. Well, if you look at XP, that’s what you’re supposed to do. But that doesn’t always work. You need to use some common sense.

Kent and I were suggesting that the specific set of practices, and how dogmatically they are applied, should be appropriate to the team, not just a blind application from a book or some other source. In short, it would be better for the team to succeed by adopting a handful of practices and using them in a way that suited the team than it would be for them to turn all the XP practice dials up to 10 and fail.

During the Q&A period, Mary Poppendieck made the comment that, except for Kent and me, the panelists sounded like the process police. I found this quite funny. A significant part of the Agile movement, besides trying to discover better ways of delivering software, was to combat RUP and other process products that were deemed to be too restrictive and dogmatic (they also had a significant portion of the market). Getting into the Agile camp allowed consultants who worked individually or in fairly small companies to gain traction in places where they might not otherwise have been able to compete. Certainly large companies were reluctant to get away from the IBM Rational security blanket. They may not always have succeeded, but they followed the conventional wisdom of “you can’t go wrong with IBM.”

So, here we are more than ten years after the Agile Manifesto was signed. Are things different? Well, yes and no. Certainly we have added many new tools to our software engineering toolbox. We look at software development differently in certain areas. We tend not to inflict heavyweight methods when lighter ones will do. In fact, we are probably guilty of erring toward the opposite end of the spectrum now and not formalizing things when we probably should. After all, we don’t want people to think that we’re not Agile!

One might argue that many of the changes we’ve seen would have occurred naturally, without the Agile movement. But, however the changes have come about, I think we have a better set of tools—intellectual and otherwise—to use in our craft. What I don’t think has changed to a large extent is the ability for software developers and managers to use common sense when deciding how to work. They still want someone to tell them what to do and how to do it.

At the end of the 1990s, the RUP was sold as a process framework that you could customize to your team, the type of project you had, and the overall organization. Customers who did were usually quite successful. Those who thought they had to do everything in the several thousand pages of advice in the RUP had some spectacular failures. Of course Rational, and later IBM, was happy to send consultants to your company to help you figure out the right configuration. That is, someone would tell you what to do, how, and when.

Today, Agile consultants come and help you adopt Scrum (certified Scrum Masters), or XP, or Lean, or Kanban, or … . It doesn’t matter what. We don’t want to think and reason about what’s best for us and our project teams. We want someone to tell us. Perhaps we think it will allow us to avoid taking responsibility for the outcome of our projects; never mind that a cornerstone of Agility, or any reasonable methodology, is reflection by the team and adjusting the process. That seems like too much (non-productive) work. We can hire someone to tell us what to think and how to do our jobs. After all, they’re the experts, the consultants who have the experience.

So, we’re right back where we were ten or twenty years ago. We let people tell us what’s good for us. We don’t think. We let others do that for us. We’re always riding the crest of the next wave, be it XP, Scrum, Lean, or whatever. If we’re current, we must be doing the right thing. That may be true, but only if the current thing actually applies to what you’re doing. In 1999, Alistair Cockburn talked about Methodology per Project. I think this essay went unnoticed at the time. That’s a pity. I think there are some real nuggets in what he talks about. Of course he’s a consultant trying to convince you that his Crystal family of processes is right for you. Simply put, every project and project team is different. We don’t manufacture software, because each program is different. If we write the exact same code multiple times, then we’re idiots. Ours is a craft. We produce one-of-a-kind products. If our products are one-of-a-kind, then maybe the way we produce them should be as well. That’s the key.

If we don’t start smartening up, we’re going to be in the same place ten years from now. Oh, there will be new methods and tools, but we won’t really know how to use them. We’ll hire someone to tell us what to use and how to use it. Same old, same old. Unless we start thinking for ourselves, reflecting upon our experiences, and taking our own destiny in our hands, we’re doomed as a profession. As the world’s appetite for software increases, we’re going to sit back and hope someone can show us how to do better.

What I hope students get out of my software engineering course is the ability to evaluate tools and methods and pick the right ones for their projects without someone else telling them what to do.

Are the CS classics still relevant?

In Computer science on July 23, 2011 at 11:11 am

After years (decades) of waiting, with little teasers in the form of fascicles (sections of a book published separately), volume 4 (well, 4A) of Donald Knuth’s The Art of Computer Programming is finally available. This volume is about combinatorial algorithms. TAOCP is probably the definitive work on algorithms and the one that is most referenced in scholarly work. For many years computer scientists and programmers earned their stripes by using something from the first three volumes in programs they wrote. There was something about being able to say that the really cool routine you wrote to improve database access was derived from something you found in Knuth. Whenever a question arose about how efficient a program or routine was, and whether there was a better way, you went to Knuth for the definitive answer.

So, to me, the release of the new volume is a really big thing—a really big nostalgic thing. I wonder how many of the current generation of software developers are excited, or even care about this release. There is no doubt that the number of people involved in computing has increased dramatically. Many of these people are programmers and not computer scientists or mathematicians. I don’t say this to belittle them. What they do is an honorable, important job. Decades ago, the state of computing was such that you really needed to be much more knowledgeable about math and the inner workings of computers than you do today if you wanted to write a program. Clearly, this is a good thing because the world’s appetite for software seems to be insatiable and we need to have as many people as possible preparing the dishes for the ravenous beast.

How many of this generation of software developers are able to, or even care to, mine the depths of the works of people like Knuth to find the gems there that are hidden from mere mortals? I have the utmost respect for many of today’s technology heroes, but to paraphrase Sen. Lloyd Bentsen, “I knew Knuth, and these are not Knuth.”

Perhaps there is too much for any one person to know about computing today. That’s been true for decades, but Donald Knuth knows as much about the core concepts as anyone I can think of. I hope that we will see a renewal of interest in Knuth’s works with the publication of volume 4A of TAOCP and that it will inspire students as it did me and many of my friends.

What other people and classics are we missing today? I’d like to compile a list of those which inspired people and stand head and shoulders above others in their field.

Portfolios are better than certification

In Software Engineering on March 29, 2011 at 12:29 pm

A few days ago Martin Fowler posted an item on his website about certification and whether certification correlates to reality. This got me thinking about certification efforts that have gone on in the past and those that are currently being considered. I keep coming back to the same question—is certification worth it? Put another way, does certification for software developers make sense?

Many years ago, and I do mean many, I decided to become a Certified Data Processor. The CDP is a certificate that was issued by the Institute for Certification of Computing Professionals. The ICCP still has a presence and offers updated certification. Even when I was a consultant I never placed CDP at the end of my name. I don’t know whether it was important or not. To me it was like any other exam, such as the SAT or GRE, that I looked at as an intellectual challenge without very much use, except to get me accepted into a school I wanted to attend. I’m not sure it really said anything about whether I was qualified. The ICCP has newer certifications, but these are approximately equivalent to the ones in effect when I took the exam. In fact, they indicate that “holders of the previous certifications qualify for the current certifications.” Does this mean that knowing how to draw a flowchart indicates that I can create a UML diagram? So what.

The ACM and IEEE have been working on certification programs for quite a while. IEEE offers exams that, when passed, enable professionals to claim certification. And yet, this has not caught on. Why not?

Last week in a department meeting we had a brief exchange about certification. There are certainly different viewpoints among faculty members. When we consider macro software engineering, which involves very large projects that often include hardware systems, the set of skills seems to change more slowly than those required to be a competent micro software engineer. Heck, the projects often take years to get to release. The macro software engineer will typically spend more time on project management of the whole system, scheduling, validation, verification, process, and so on. These are certainly necessary and valuable skills. But does one need certification? And, if so, does this have anything to do with the software developer trying to build a particular module?

At the micro level things change much more rapidly. New languages, new technology, and new practices that enable one to deal with a changing environment pop up regularly. Should we have certifications specialized for web developers, embedded software developers, IT applications, and so on? If so, where do we draw the lines, and can we keep up with all of the advances?

When I was at Rational, the RUP was a framework that we felt described the process of building software. The companies that succeeded using RUP were those that understood the framework needed to be specialized and customized. Those that attempted to apply all the practices and advice in the 3000 or so Web pages were doomed to fail. The fact is that one size does not fit all. We must consider the context—project, people, environment—in order to be successful in a software development project.

There’s another, more practical reason why certification will not work. There is just too much software that needs to be created and not enough people to do the work. Will the lack of certification really cause a company to not hire someone who can get the job done?

So, if not certification, then what?

Portfolios can correlate to competence

A few years ago Pete McBreen penned his book, Software Craftsmanship. If you haven’t read the book you should. McBreen likens software developers to craftsmen of previous centuries. We learn through a series of apprenticeships. Just as someone who wanted to be a jeweler would sign on with a master jeweler as an apprentice, new software developers begin their careers by learning from master software developers. An apprentice jeweler might spend years becoming a master jeweler, or might instead apprentice under several different master jewelers and become a journeyman. So too, one software developer might become a master and specialize in one type of software development, while another might become competent in several areas and become a modern-day journeyman.

With this model in mind, I believe there is a better way of determining who can build the next generation of our software. If you are looking for jewelry, a piece of art, a house, or any number of things that require customization and craftsmanship, you don’t look for certification. What you do is look for a person with a portfolio that indicates she is capable of producing the type of artifact you desire. The same approach can be used to find software developers.

I encourage my students to begin putting their portfolios together early in their academic careers. As they proceed through the university they should add to the portfolio so that when they graduate they have something to show prospective employers about what they can do by showing what they have done. Then, as they advance in their careers they can add to this portfolio and decide whether they want to be a master of a small set of skills or a generalist. There are some—but few—who can be masters of everything.

This approach, in my opinion, makes much more sense than looking for certification. Certification simply means—in the way it has been and is being implemented—that the person has learned how to pass one or more tests. The portfolio lets you see a body of work and will often show how the person has advanced. This lets you find people who not only have a set of skills but, more importantly, have shown that they can learn and apply those skills.

Isn’t this what we really want from our software developers?

Arduino Part 3

In embedded software on February 19, 2011 at 8:10 pm

Learning More: Getting Ready for the Class

I’ve learned a bit more about the Arduino in the last couple of weeks, since my last Arduino post. Most of the information has been about using Eclipse and the Arduino Uno. The basic project I completed last time was the simple blinking example on the Diecimila board. It has an ATmega168 processor and runs just fine. However, when you go to Eclipse, you need to make sure that you set the clock frequency to 16000000 Hz, not 1000000 Hz as in some of the documents. Other than that, everything works just great.

Enter the Uno

The Arduino Uno is the latest version of the Arduino. I ordered a starter kit from Amazon. It was a Duemilanove kit, but it shipped with the Uno, which has the ATmega328p processor. It worked just fine with the Arduino IDE, but when I tried to duplicate the steps to get it working with Eclipse, I failed miserably. It’s taken a while to track things down, but I think I’ve got it pretty much sorted out now. Here are some of the things that I’ve found helpful.

  1. When you set up the AVRDude configuration for the Uno, the baud rate should be 115200 instead of 57600 as some of the documents indicate. This will get rid of the annoying “stk500_recv(): programmer is not responding” message when AVRDude runs.
  2. In order to see what’s going on with the Arduino IDE, you can modify the preferences.txt file to add some verbose options (see the snippet after this list). The file is in the Library/Arduino directory under my home directory on the Mac. Two lines I’ve found useful:
    • upload.verbose=true displays the command line used for invoking AVRDude.
    • build.verbose=true displays the output of the build process.
  3. I’ve copied the Uno core.a library created by the Arduino IDE and am going to use it rather than creating an Eclipse project for building such a library like I did for the Diecimila. This reduces the number of things I have to work with, and it is the preferred way of getting the library according to the installation instructions. One problem is finding the library on the Mac; the Eclipse installation instructions from the Arduino playground specify its location for other platforms, but not for the Mac. It took a little while, but after putting build.verbose=true in the Arduino IDE’s preferences.txt file, I found the directory. It was in /var/folders/7A/… . Not the most obvious place. I moved it into a lib/arduino directory in my home directory.
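For concreteness, the two verbose switches are plain key=value lines appended to the preferences.txt file mentioned in item 2 (edit it while the IDE is closed, since the IDE rewrites the file on exit):

    build.verbose=true
    upload.verbose=true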

After all of this, I still wasn’t getting the program to execute. It compiled and loaded, but did not run on the Uno.

Finally: Success

So, it was back to the drawing board. I went back and recreated a separate project for the Uno core library. Then I compiled that and created a new project for the Uno blink program, and it worked. It’s still not the easiest thing to make sure you get all of the settings correct, but it is nice to see the light blink.

Next, I think I’ll work on building the project in a simpler environment: a text editor, command-line compilation, and make. (A first sketch of what that might look like appears at the end of this post.)

First, it’s probably worth recording the settings I’ve got on the two projects.

Uno Core Library Project

I simply copied all of the sources from the …/cores/arduino directory to the top level of this project. I deleted main.cpp, as the Eclipse installation directions said to. Next, I followed the directions from the Eclipse installation page exactly. The relevant preference settings for the project are:

  • AVR>Target Hardware: ATmega328p, 16000000 Hz clock.
  • C/C++ Build>Settings: no debugging information, and optimize for size on both compilers (C and C++). Also, the other optimization flags for the C++ compiler are “-ffunction-sections -fdata-sections -Wl,--gc-sections”.

I have both the debugging and release configurations set for this, but it only needs release. We’re not really worrying about debug configurations at all here.

The Uno Blink Project

Again, I followed the instructions in the Eclipse installation directions. There was one thing not covered there: getting the library included in the linking step. Here are my settings:

  • AVR>AVRDude: Uno programmer configuration. No switches set under Advanced tab.
  • AVR>Target Hardware: same as above.
  • C/C++ Build>Settings
    • Additional Tools in Toolchain: Generate Hex Files for Flash Memory and Print Size are checked.
    • AVR Compiler>Directories: “${workspace_loc:/UnoCore}”, where UnoCore is the name of my Uno core library project.
    • For both compilers (C and C++), no debugging and size optimizations.
    • AVR C++ Compiler>Directories: same as for the AVR Compiler.
    • AVR C++ Compiler>Optimization: -ffunction-sections -fdata-sections -Wl,--gc-sections
    • AVR C++ Linker>Libraries: Libraries (top): UnoCore, which is the name of my library for the Uno core. Libraries Path (bottom): “${workspace_loc:/UnoCore/Release}”.
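Since my next step is the command-line experiment, here is roughly what the Eclipse settings above should translate to with the bare AVR toolchain. This is an untested sketch based on my settings; the source file name and serial port are hypothetical:

    # Compile against the Uno core headers (project layout as in my workspace above)
    avr-g++ -mmcu=atmega328p -DF_CPU=16000000UL -Os \
        -ffunction-sections -fdata-sections \
        -I../UnoCore -c blink.cpp -o blink.o

    # Link against the UnoCore static library, discarding unused sections
    avr-g++ -mmcu=atmega328p -Wl,--gc-sections \
        -L../UnoCore/Release -o blink.elf blink.o -lUnoCore

    # Convert to a hex image and upload with AVRDude (note the 115200 baud rate)
    avr-objcopy -O ihex -R .eeprom blink.elf blink.hex
    avrdude -p atmega328p -c arduino -P /dev/tty.usbmodemXXXX -b 115200 \
        -U flash:w:blink.hex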