Friday 17 August 2012

Sensor system (machine vision) development team organization and management.


A note on the organization of teams and processes for the development of automotive industry machine vision systems.

Sensor systems (particularly machine vision systems) operating in unconstrained environments pose significant design and engineering challenges. It is, however, possible to build them, and not at exorbitant cost either, but doing so means adopting some slightly unusual and highly disciplined engineering practices.

First of all, the fact that the sensor system is to operate in an unconstrained environment with a very large number of free variables necessarily means that there will be lots of "black swan" / "long-tail" type problems that are not known in advance: the bird that flies in formation with the car for a short while; the van carrying a mirror on its side; the torrential downpour limiting visibility; the train moving alongside the road; that sort of thing.

As a consequence, you need moderate-to-large volumes of pathological test data, together with the associated machinery and automation that will allow candidate system configurations to pit their wits against the worst that the test system can throw at them. 
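To make that concrete, here is a minimal sketch of what such a harness might look like, assuming the test cases live as files in a corpus directory and each candidate configuration exposes a detect() entry point returning a score; every path and name here is a hypothetical placeholder, not a description of any particular system:

    import json
    from pathlib import Path

    TEST_DATA = Path("/data/pathological_clips")  # assumed corpus location
    RESULTS = Path("results.json")

    def run_all(candidates):
        """Pit every candidate configuration against every pathological clip."""
        results = {}
        for clip in sorted(TEST_DATA.glob("*.clip")):
            for name, candidate in candidates.items():
                # candidate.detect() is an assumed interface returning a
                # score in [0, 1] for how well the clip was handled.
                results.setdefault(name, {})[clip.name] = candidate.detect(clip)
        RESULTS.write_text(json.dumps(results, indent=2))
        return results

The point is not the specifics but that the whole loop runs without human intervention, so that every candidate faces exactly the same adversarial corpus.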

The problem also requires that we get the basics right early on, so that the team can focus on the real problem without tripping themselves up with regressions and other problems (all too easy to do). Whilst solutions can be relatively simple (indeed, they should be as simple as possible), the amount of work required to cover the basics is large, and no one person can make sufficient progress on their own. To succeed requires both teamwork and a level of focus, organization and automation that is not commonly found in an academic environment. It also requires that we master the tricky balance between discipline at the team level and freedom to innovate at the individual level.

It is important that we pay attention to the organization of the system. We need to think about it briefly, pick a good one, and then move quickly, avoiding bike-shedding; a wide range of organizations can be made to work. (Here is one I like.) The organization of the system must be reflected in the organization of the team and in the organization of the filing system. The filing system would (ideally) be put into the version control repository, with the commit log used as a management tool to monitor progress. The important thing is that the organization is agreed upon and reflects the organization of the capabilities or systems being developed.
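For illustration only, a repository layout that mirrors the capabilities being developed might look something like this (every name below is a made-up placeholder):

    project/
        doc/             -- the agreed-upon organization, written down once
        src/
            detection/   -- one directory per capability under development
            tracking/
            classification/
        test/
            data/        -- pathological test cases, one per known failure mode
            harness/     -- automation that runs candidates against the data
        tools/
            hooks/       -- post-commit hook and reporting scripts

The particular shape matters far less than the fact that team, filing system and capabilities all line up.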

With a good organizational structure picked, the next thing to focus on is total quality through test automation.

Weekly meetings, daily standups, documentation, wikis and so on are all good ideas, but they are much less important than AUTOMATION, which IS ABSOLUTELY CRITICAL to success. Here is why: humans are liars. We cannot help it. When we talk to other people, especially our superiors, we try to present ourselves in the best light. We spin the situation and use weasel words to make ourselves look better than we are. Honesty is crucially important if the team is to make good decisions, and the only way to achieve it is to take reporting out of the hands of human beings.

You will need management buy-in for this. Use financial audit as an analogy: we audit the company books so that investors can have some degree of faith in them, so why not also audit design progress?

First of all, you will need some hardware. Start with one server running the repository and a simple post-commit hook script (ideally also stored in the repository) :-). Continuous integration software like Hudson is not necessary (and perhaps not desirable); better results can often be achieved with a Python, MATLAB or Go script (or some such) that runs tests and measures performance.
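As a purely illustrative sketch, a Subversion-style post-commit hook might delegate to a small Python script like the one below; the results file location, the scratch directory and the run_tests.py entry point are all assumptions made for the example:

    #!/usr/bin/env python
    # Called by the repository's post-commit hook with the repository URL
    # and new revision number as arguments. Exports the revision, runs the
    # test suite, and appends one summary line per commit to a results file.
    import datetime
    import subprocess
    import sys

    RESULTS_FILE = "/var/ci/results.log"  # assumed location

    def main(repo, revision):
        workdir = "/tmp/build-r%s" % revision
        subprocess.check_call(["svn", "export", "-r", revision, repo, workdir])
        proc = subprocess.run([sys.executable, workdir + "/run_tests.py"],
                              capture_output=True, text=True)
        out = proc.stdout.strip()
        summary = out.splitlines()[-1] if out else "no output"
        stamp = datetime.datetime.now().isoformat()
        with open(RESULTS_FILE, "a") as f:
            f.write("%s r%s exit=%d %s\n"
                    % (stamp, revision, proc.returncode, summary))

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])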

The post-commit hook script is the main way that we are going to feed performance back to managers, so have it save results to a file, and have a daily process summarize the contents and email the summary to the management team. Do this even (especially) if the results file says "No progress measured today".
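A matching daily job (run from cron, say) could then be as simple as the following sketch; the addresses, mail server and file path are placeholders:

    # Summarize the day's results and email them to the management team.
    import smtplib
    from email.message import EmailMessage
    from pathlib import Path

    RESULTS_FILE = Path("/var/ci/results.log")  # written by the hook above

    def build_summary():
        lines = RESULTS_FILE.read_text().splitlines() if RESULTS_FILE.exists() else []
        if not lines:
            return "No progress measured today."
        return "%d commits tested today:\n%s" % (len(lines), "\n".join(lines))

    def send_summary():
        msg = EmailMessage()
        msg["Subject"] = "Daily development status"
        msg["From"] = "ci@example.com"        # placeholder address
        msg["To"] = "management@example.com"  # placeholder address
        msg.set_content(build_summary())
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)
        if RESULTS_FILE.exists():
            RESULTS_FILE.write_text("")  # start tomorrow's log afresh

    if __name__ == "__main__":
        send_summary()

Note that the "no progress" case is sent deliberately: silence from the report should mean the reporting is broken, never that nothing happened.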

Initially, the management team might not want to read it, but get them to at least pretend that they do, so that there is a semblance of oversight. If they want more information, get them to ask for it to be included in the daily development status email. Try to encourage the view that "if it is not in the automated report, it is not done".

:-)

Bringing progress reporting and test automation together could, I think, be a powerful way to support and encourage transparent, professional development.

Manual reporting and communication mechanisms, such as documentation, hand-written progress reports, wikis and meetings, have a smaller impact on success than tooling and automation.

It is about building a design production line for the organization.
