Thursday, June 12, 2008

Converting a Class Library Project to a Test Project

Here's a Visual Studio 2008 tip. Have you ever created a C# project of type "Class Library" and then later wanted to change it to a Test Project? Here's how:

1. Edit the .csproj file in Notepad and insert the following element inside the first <PropertyGroup>:

<ProjectTypeGuids>{3AC096D0-A1C2-E12C-1390-A8335801FDAB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>

The first GUID identifies the project as a test project; the second is the standard C# project type.

2. Right-click the solution in Solution Explorer and choose Add > New Item... Then pick Test Run Configuration.

3. If your test cases need data files (such as XML files), you can either edit the Test Run Configuration and add them to the Deployment section, or decorate the test with the attribute:

[DeploymentItem("MyXMLFile.xml")]

Monday, December 03, 2007

Learning Java

Most developers live in one camp or the other. Few people who work inside the Microsoft ecosystem (Win32, .Net, ASP.Net) spend much time in Java, and vice versa. That's what makes religious wars about languages and IDEs so lame; most programmers have never seriously worked with both.

That's why it's refreshing to see Eric Sink trying out Java. I spent last winter moving from C# to Java and Eclipse, and can echo many of his points:

  • The Java string comparison (you can't use ==) is a huge gotcha for C# programmers. It fails silently (no compiler warning or runtime exception), which is fair given that's the Java rule for strings (use equals()). CheckStyle or some other lint-like tool can catch this.
  • The key bindings take getting used to. F11, F6, F5, F8 in Eclipse, instead of F5, F10, F11, and F5 for Visual Studio for Debug, StepOver, StepInto, and Continue. I could re-map these, but I think it's better to simply get used to them (which gets harder as one gets older!).
  • The Java ecosystem is much more active and innovative. Spring, Hibernate, and other fascinating efforts occur here. Over in the .Net world, people generally sit around and wait for Microsoft to do something. Or they port good ideas from the Java world.
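The == gotcha for C# programmers can be seen in a minimal sketch (the class and variable names here are my own):

```java
// In Java, == on strings compares references; equals() compares contents.
// C# overloads == for strings, so this trips up C# programmers silently.
public class StringGotcha {
    public static void main(String[] args) {
        String a = "hello";
        String b = new String("hello"); // same characters, different object

        System.out.println(a == b);      // false: different references
        System.out.println(a.equals(b)); // true: same contents

        // Interned literals make it worse: this comparison happens to be true,
        // so == can appear to work during testing and then fail in production.
        String c = "hello";
        System.out.println(a == c);      // true: both refer to the interned literal
    }
}
```

Because string literals are interned, == sometimes *does* return true, which is exactly why the bug survives casual testing.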

Thursday, August 09, 2007

Chief Programmer Teams

I found an ancient programming book: Top-Down Structured Programming Techniques by Clement L. McGowan and John R. Kelly (1975). It describes Harlan Mills's project at IBM, where he pioneered Chief Programmer Teams (CPT). Guess what. It worked. The reasons for its success are easier to see from an agile perspective. Early-70s software development was primitive. The reigning paradigm, structured programming, seemed solely concerned with control flow. Avoiding spaghetti logic is a good thing, but what about data? There was no talk of global variables and how to avoid them, nor of data structures, design patterns, module dependencies, or frameworks.

CPT's good features
  • Code centric. Unlike later methodologies that became focused on design artifacts such as diagrams and object models, CPT was focused on code: good, clean, readable code accessible to all team members.
  • "Automated" development environment. In the sense that programmers were supposed to focus on programming and a librarian (human being) managed builds, test runs, source code control, and backup. This also promoted early integration of all code (avoid Big Bang System Integration nightmares). And buildable-code every day.
  • Top-down programming. Write the top-level code first and stub out the lower levels. So module M would be written with all the lower-level modules it calls stubbed out. This led to 'buildable code every day' and some measure of testing.
  • "structured programming" approach promotes abstract thinks. The topmost levels consist of fn calls to lower levels. In order to understand the top level you really need to understand what each subsystem call does -- its contract.
  • Chief Programmer does the design. Other methodologies split design work out across sub-system teams, leading to inconsistencies in naming, approaches, and quality.
  • Promotes metrics ('development accounting'): number of bugs, number of builds, lines of code.
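The top-down, stub-the-lower-levels style in the bullets above might look like this minimal sketch (the module names and placeholder values are my own invention, not from the book):

```java
// Top level written and buildable first; lower-level modules are stubs.
// The whole program compiles and runs every day, long before it's finished.
public class ReportRun {
    public static void main(String[] args) {
        String data = fetchData();       // real implementation exists
        String formatted = format(data); // still a stub
        System.out.println(formatted);
    }

    static String fetchData() {
        return "42 widgets";
    }

    // Stub: returns a placeholder so the system stays buildable
    // while the real formatter is still being designed.
    static String format(String data) {
        return "[TODO format] " + data;
    }
}
```

The payoff is exactly the one the book claims: integration happens continuously, because module M and its (stubbed) callees always compile and run together.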

CPT's weak points

The Chief Programmer is supposed to be many things:
  • senior-level programmer
  • experienced professional
  • highly skilled in analysis, spec, design, coding, testing, and integration
  • capable of managing a team (cost, time, resource requirements)
  • capable of working with senior management
  • capable of working with the customer
There is no talk of prototyping. Top-down programming really requires that you know where you're going: you're supposed to decompose and decompose. What if you reach a lower level and realize that a module is impossible or impractical to implement?

Top-down assumes monolithic software where the programmer controls main() and the entire application stack. And indeed, in the old days that's exactly how it was. Modern systems are event-driven, loosely coupled, and cross multiple technology boundaries. A user browses to a web site, whose server-side code invokes a web service, which updates a DB, which fires a trigger, which queues a job request in MSMQ, which generates some content, which notifies 'subscribers' via RSS. The programmer relies on the underlying platform to connect these pieces using mechanisms and interfaces of the platform's (not the programmer's) choosing.


All in all, CPT reminds me of open-source development Linus Torvalds-style, but pre-Internet, with everybody in one room.

Monday, June 18, 2007

On Code-Generation Tools

The 90th Percentile had an article bashing code generation as a programming technique. The author suspects that visual programming is no better (and in many ways worse) than textual programming. The generated code is often unreadable, and with proprietary tools you'll be forever dependent on the vendor for bug fixes and updates.

The IVR industry is especially wedded to visual tools, because they actually work well for DTMF apps, whose structure is basically a tree. However, the tools promote the anti-pattern of putting all the business logic in the IVR equivalent of the onClick event.

I would defend code-generation for cross-language problems such as build and deploy tools, or backup tools that are a combination of code and scripts.

Tuesday, June 12, 2007

Mondrian at Google

Python's inventor Guido van Rossum is at Google. His first project was a tool for code reviews called Mondrian. Described here and in a video.

This is a revealing glimpse of a 21st-century software development organization.
  • Heavy use of tools to automate the organization's own development process.
  • Social, not silos. Developers can view other developers' Mondrian dashboards. The tool is an enabler, not a rule enforcer. The review process itself builds relationships between senior and junior staff.
  • Save everything. Mondrian saves every reviewed source file and all comments. Great for resolving customer problems months later. Also great for tracking metrics.


Other notes
  • Data is encrypted on the hard drive, so there are no privacy worries when a server is thrown out.
  • Google uses Perforce (p4) but no developer branches! This means code reviews must work with files on dev machines. They use NFS, so anyone can browse anyone else's machine.
  • Runs on one box! In Python.
  • Uses Google BigTable.

Wednesday, June 06, 2007

Replacing the OS

Marc Andreessen once said that "the combination of Java and a Netscape browser would relegate the operating system to its original role as an unimportant collection of slightly buggy device drivers." Pretty funny, considering Microsoft has fifty billion in cash and Java is nowhere to be seen on the desktop.

Yet the idea remains tantalizing. Change the phrase to "JavaScript and a web browser" and we have AJAX. Or Adobe Flex. Or Google Gears.

In fact one could make a thin client platform out of one of these AJAX technologies and then replace the OS with a minimal set of services, like the GEOS operating system. Feasible but highly improbable, at least on the desktop. Yet if some new device appeared, larger than a cell phone but smaller than a laptop, it's a whole new ball game. Instant-on is a feature I would dearly love to have, and it's not going to happen on Windows.

Tuesday, June 05, 2007

Speech to Text coming to cell phones

This video introduces Morpheus's upcoming speech-to-text technology. The reason this is important is audio bandwidth. The phone network is based on 64 kbps audio (8 kHz sampling at 8 bits per sample), which limits the signal to 4 kHz of bandwidth. That's fine for humans to understand, but it's missing a lot of the higher frequencies that speech recognition engines need to improve their accuracy. That's why phone-based speech rec uses discrete grammars, which are simpler to recognize. Desktop speech rec can do full dictation because a high-quality audio path exists.

Morpheus (and others) use network-based speech rec. The user's device captures the audio and does some basic processing before streaming it as data. Network-based speech rec engines receive the data, do the recognition, and send back the recognized text. Not only does this avoid the audio bandwidth problems, it also avoids running the speech rec engine on a CPU-limited cell phone.

This still isn't perfect dictation accuracy; the Morpheus video mentions roughly a 10% error rate. So it's not really ready for dictating blog posts from your phone yet, but accuracy is tightly tied to CPU power, which improves every year.

As the VUI design blog says, the recent acquisitions of BeVocal and TellMe are perhaps being driven by interest in network-based speech.