Choosing an architect is itself a creative process. Paul Finch gives his recipe for a successful outcome
As I mentioned recently, there is a simple way to avoid the perils of academic grade degradation (or inflation). The method, to recap, is to decide what percentage of candidates should get an A grade – let’s say 10 per cent – and do the same for all the other grades. You can then make the exams as hard as you like without penalising that top 10 per cent: the system promotes excellence without punishing anyone for the difficulty of the paper.
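For readers who like to see the mechanics, the recap above amounts to grading by rank against fixed quotas. A minimal sketch follows; the grade shares below B are my own invented illustration (the column specifies only the 10 per cent A band), and the function name is hypothetical.

```python
def quota_grades(marks, quotas=None):
    """Map raw marks to grades using fixed proportions per grade.

    Only the 10% A band comes from the column; the B-E shares here
    are assumed purely for illustration. Any rounding remainder
    falls into the lowest grade.
    """
    if quotas is None:
        quotas = [("A", 0.10), ("B", 0.20), ("C", 0.30), ("D", 0.25), ("E", 0.15)]
    # Rank candidates from highest mark to lowest, then hand out
    # grades by position - however hard the exam was, the top 10%
    # still get their A.
    ranked = sorted(marks.items(), key=lambda kv: kv[1], reverse=True)
    n = len(ranked)
    grades, i = {}, 0
    for grade, share in quotas:
        take = round(share * n)
        for name, _ in ranked[i:i + take]:
            grades[name] = grade
        i += take
    for name, _ in ranked[i:]:  # rounding remainder, if any
        grades[name] = quotas[-1][0]
    return grades
```

With ten candidates this hands exactly one A to the highest mark, whatever the raw scores happen to be.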
But what about assessing architectural quality, not in the schools but in the context of selection panels?
Having had a fair amount of involvement in selection over the years, my first strong advice to clients is to avoid design competitions unless they know exactly what they want, have control of the site, and are completely committed to proceeding. If any of these does not apply, it will be a time-waster.
Assuming what the client wants is an architect, and not a specific design, they can invite expressions of interest, or simply invite practices they admire, to take part in an initial ‘beauty contest’. A shortlist can then be selected – whose members can and should be paid to do some design work – before the final selection is made.
What works well once an interview list is decided is an invitation to present (a) the practice and work it may have carried out previously that is relevant to the proposed project (and not necessarily the same building type); (b) any thoughts about the specific challenges and opportunities associated with the project and site; and (c) thoughts about other consultants the architect would envisage as part of their team.
It should be made clear that a specific design is not sought for the first interview; that anyone who brings a model to the interview will be disqualified; and that anyone who makes a detailed design presentation will be assumed to be incapable of understanding client requests.
One hour is generally sufficient time for a practice assessment: 20 minutes of presentation, 20 minutes of question and answer, and 20 minutes after the practice has left for the panel to discuss collectively, then score individually.
Adding up numbers is quantitative, whereas a grading system is qualitative
In respect of assessing quality of practices and presentations, I advocate the following general method:
1. Avoid numerical scoring in favour of grading, where A = Outstanding; B = Very good; C = Good; D = Adequate; E = Poor
2. Use grading sheets with room for comments
3. Ask selectors to (a) complete their individual grading once the presentation is over as described above; and (b) keep a note of their general thoughts about the quality of what they saw and heard
4. Sheets to be handed over to an administrator who can note the combined number of As and Bs awarded for each team.
At the end of the day, and without aggregating the grades in the first instance, ask the panel to discuss who they would like to go through to the next stage, or be appointed. This may result in relative unanimity, at least after the discussion has taken place. Obviously a good chair is a help here.
In the event of an impasse, the administrator can be asked for the aggregated A and B grades. If absolutely necessary (ie things have got really difficult), you can ascribe numerical scores to the grades, where A = 5; B = 3; C = 2; D = 1; E = 0.
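For the administrator’s benefit, the tally described above is simple enough to sketch. This is an illustration only – the panel data and function names below are invented; the combined A-and-B count and the fallback numbers (A = 5, B = 3, C = 2, D = 1, E = 0) are the column’s.

```python
from collections import Counter

# Fallback numerical values, used only if things have got really difficult.
FALLBACK = {"A": 5, "B": 3, "C": 2, "D": 1, "E": 0}

def ab_count(grades):
    """Combined number of As and Bs a team received across the panel."""
    c = Counter(grades)
    return c["A"] + c["B"]

def fallback_score(grades):
    """Numerical total for a genuine impasse - the last resort."""
    return sum(FALLBACK[g] for g in grades)

# Invented example: four selectors grading two practices.
panel = {
    "Practice X": ["A", "B", "B", "C"],
    "Practice Y": ["A", "A", "C", "D"],
}
for team, grades in panel.items():
    print(team, ab_count(grades), fallback_score(grades))
```

Note that in the invented example both practices tie on the fallback numbers (13 each) while differing on As and Bs – a small reminder of why the qualitative discussion, not the arithmetic, should come first.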
Why not apply the numbers in the first place, I hear you ask? It is because adding up numbers is quantitative, whereas a grading system is qualitative. I notice that QS/project manager types love numbers. That is because they are not creative. The purpose of these exercises is to get real architects.