What we got
The processing of the questionnaires showed that while overall approval of the concept and interest in the exhibition were very high, respondents differed in how they rated individual topics. There were almost no outright "failed" items: what left some people cold, others liked. At the same time, very few items enjoyed 100 per cent support from the audience. Among the latter were subjects from the history of travel (Soviet tourist albums, travelling wagons, guidebooks, postcards, "special offers" of the 19th century, Soviet tourist advertising, the evolution of the suitcase and backpack), topics related to the pragmatics of travel (typical lists of things to take on the road in different eras, staged tourist photography of different eras), and participation zones (a world map on which visitors are invited to mark with a flag the place - a country or a city - they dream of visiting).
Thus, the testing showed that the concept interested the audience, resonated with them and could be implemented in practice. This was partly due to the breadth of coverage of the world of travel, which allowed different groups of potential viewers to be reached. That redundancy was worth retaining.
The processing of the results of the auditorium evaluation on four scales provided additional information:
- the majority of respondents rated the "Road" hall significantly lower than other sections of the exhibition on all scales,
- on the scale "arouses interest", the majority of respondents gave the maximum score to the halls "Another World" (90% of respondents), "Choosing a Direction" (80%), "Gatherings" (70%) and "Memories" (62%),
- on the scales "simplicity of perception", "touches me personally" and "interactivity, desire to participate", all the halls scored roughly the same (45-65% of respondents gave 3 points out of 3, and 30-40% gave 2 points). Against this background, only the "Gatherings" hall stood out slightly, proving the most understandable and easiest to perceive (85% of respondents gave 3 points).
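The tallying behind these figures can be sketched in a few lines of Python (a minimal illustration: the hall and scale names come from the text, but the ratings below are invented sample data, not the survey's actual records):

```python
from collections import Counter

# Hypothetical raw ratings: one (hall, scale, score 0-3) tuple per answer.
ratings = [
    ("Another World", "arouses interest", 3),
    ("Another World", "arouses interest", 3),
    ("Gatherings", "simplicity of perception", 3),
    ("Gatherings", "simplicity of perception", 2),
    ("Road", "arouses interest", 1),
    ("Road", "touches me personally", 1),
]

def share_of_top_scores(ratings, top=3):
    """For each (hall, scale) pair, the share of respondents who gave `top` points."""
    totals, tops = Counter(), Counter()
    for hall, scale, score in ratings:
        totals[(hall, scale)] += 1
        if score == top:
            tops[(hall, scale)] += 1
    return {key: tops[key] / totals[key] for key in totals}

shares = share_of_top_scores(ratings)
print(shares[("Another World", "arouses interest")])  # 1.0
```

Running the same aggregation per scale over the real questionnaire data would reproduce the percentages quoted above.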
Final discussion questions for section 3.1
1. What is more important: the quality of the final concept or the experience of the audience involved?
It seems that each organisation answers this question in its own way. Ideally, the two factors should be linked: people want not just to participate, but to be co-authors of large, interesting and socially significant projects. Nevertheless, these are two different aspects of Museum 2.0 that require separate discussion.
Since our approach to the culture of participation came from the side of interpretation and the development of the exhibition concept itself, here we will focus on the aspects that affect the quality of the product that was the goal of this interaction (involving people without such a significant social or cultural goal is pointless and unnecessary, both for you and for them).
We will discuss the second part of the question in detail in section 3.10.
2. How do you get a truly first-rate idea? The creativity of the group, the ability to go beyond stereotypes and obtain non-trivial ideas and suggestions - all these are pressing issues in such co-authored projects. What guarantees that a visitor with some museum experience will be able to overcome their own perceptions (frames) and create something genuinely new? This leads to more general questions: about the potential of collective authorship in culture; about what can be gained from the audience at all; about the impact on the quality of the result; and about the representativeness of co-authors.
We don't yet have a definite answer to all of these, but we do have a selection of cases that demonstrate how to improve the quality of the results of an engaged audience.
In order to improve the quality of the product (the text for an advertising booklet), the Zodchie House of Culture offered contest participants a free professional masterclass on the topic "The Word as a Sales Manager".
As part of the creation of the S.H.O.E.S. exhibition, participants were asked to write a text for one part of the exhibition. Along with this, they were offered tips and ideas for writing texts from the writer Ronald Snijders.
Sberbank's crowdsourcing experience has clarified a number of aspects related to the effectiveness of collective development. More than 120,000 people from 78 countries took part in the Sberbank-21 project initiated by Sberbank in 2011; of these, 15,000 actually proposed something. Proposals were collected on four platforms: the bank's own website (http://sberbank21.ru), the Professionals.ru social network, the WikiVote! platform, and the Witology website.
What is interesting for us: different platforms showed different effectiveness.
"...The discussion on Professionals.ru, which had 15,000 active participants, was more like a book of complaints from customers. And the solutions that got the most votes were not characterised by depth and originality" [4].
The platform http://sberbank21.ru was supervised by a professional facilitator - a specialist in discussion management. He formulated tasks and helped participants to organise their work more effectively. The bank did not have its own facilitators, so they engaged employees of a third-party company (the site operator). The facilitator was not an expert in the topic under discussion; his role was similar to that of a talk show host - to manage the discussion, maintain interest in the topic, and quell conflicts. It was the job of the bank's employees to control the discussion at the substantive level.
The most productive discussion took place on the Witology website, where innovative technology was used to select opinions, filter information and identify the best participants. As a result, the participants coped with a rather serious and time-consuming task: developing the concept of a future branch - online for small businesses, and online and offline for private clients.
For this purpose, a serious selection was first carried out among those wishing to take part in the project: applicants had to pass a test, which only 500 out of 5,000 did.
The competitive atmosphere was also key to the further realisation of the project. "At each stage, a participant could increase his weight if his proposal was approved by other participants. In the first stage, the services and functions of the office were selected. Then we chose the tasks to be solved. Then came the search for solutions, gluing them together. And finally, they were put on the exchange. At the final stage, the participants assessed the risks of the solutions that won on the exchange, and the best one was selected. Out of 500 people who passed the selection, 150 turned out to be active, 50 were productive, and at the end 15 people made up a powerful friendly team" [5].
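The staged funnel described in the quote (500 selected, 150 active, 50 productive, a final team of 15) can be illustrated with a toy model. Everything below is invented for illustration - the function, participant names and approval numbers are hypothetical, and this is not Witology's actual algorithm:

```python
# Toy model of staged crowdsourcing selection: participants gain "weight"
# when peers approve their proposals, and each stage keeps only the
# highest-weighted contributors. Purely illustrative, not Witology's algorithm.

def run_stage(weights, approvals, keep):
    """Add peer approvals to each participant's weight, keep the top `keep`."""
    updated = {p: w + approvals.get(p, 0) for p, w in weights.items()}
    ranked = sorted(updated.items(), key=lambda item: item[1], reverse=True)
    return dict(ranked[:keep])

# Hypothetical cohort: everyone starts at weight 0 after the entry test.
weights = {f"participant_{i}": 0 for i in range(500)}

# Approvals per stage would come from peer voting; here they are made up
# so that the funnel narrows the way the quoted figures describe.
stage_approvals = [
    {f"participant_{i}": 500 - i for i in range(150)},  # 150 active
    {f"participant_{i}": 150 - i for i in range(50)},   # 50 productive
    {f"participant_{i}": 50 - i for i in range(15)},    # final team of 15
]

for approvals, keep in zip(stage_approvals, (150, 50, 15)):
    weights = run_stage(weights, approvals, keep)

print(len(weights))  # 15
```

The point of the sketch is the mechanism, not the numbers: repeated peer evaluation plus a cutoff at each stage is what turns a large crowd into a small, highly productive team.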
If this strategy is transferred to the design of museum exhibitions, what criteria can be used to select "productive" participants, and what criteria and scale can be used to evaluate the proposed solutions? And is it possible, in principle, to create an exhibition concept in this way?
In any case, this case study is a good confirmation of the thesis that quality at the output is largely quality at the input - a consequence of how clearly the objectives, questions and framework of the process are articulated:
- a framework and a scheme of work increase productive thinking and help participants get more satisfaction from their work,
- the framework is a constraint set by your mission and target audience (the brief sets the framework),
- a framework is honesty towards the audience, in the sense that, with limited resources, the authors of the project can only realise something specific (incidentally, one reason crowdsourcing is pointless in many areas is the lack of political will to bring people's ideas to life).
But creating content is only part of the job. It is no less important to include the viewer in selecting, evaluating and filtering ideas, and in decision-making. For example, two Moscow crowdsourcing projects in urban environment development, "What Moscow Wants" and "Active Citizen", successfully complement each other: while the former focuses on the search for fundamentally new ideas and projects, the latter is based on citizens voting on issues of urban life.