That open-air room set is murder on your attendees’ concentration. Your keynoter’s voice is getting lost in the cavernous depths of your theater space. And the music at your cocktail reception is way too loud. But don’t worry — good sound design is as easy as listening to the audiology experts, meeting organizers, and AV professionals we’ve assembled.
Colby Leider, Ph.D., is associate professor and director of the music-engineering technology program at the University of Miami’s Frost School of Music. He teaches courses in environmental acoustics, acoustic ecology, architectural acoustics, loudspeaker design and analysis, and digital-signal processing. When he’s not in the classroom, Leider consults with architects on acoustical treatments for a variety of large venues — including a private, 4,000-square-foot museum project in Nicaragua that he had just started work on when Convene spoke with him in March. We asked Leider to apply his acoustic expertise to the spaces that meetings inhabit.
Why is sound such a problem at many meetings?
In some buildings — especially convention halls and auditoriums — once a certain size or volume threshold is exceeded, maintaining an appropriate intelligibility quotient becomes increasingly difficult. There are about 10 different ways that we can measure or even predict the intelligibility of a space or of speech.
What are those measurements a function of? Architectural design? Building materials?
It’s mostly a function of the construction: the volume of the room — [measured in] cubic meters or cubic feet — and the total surface area of the room. And the most important part is the materials that are on the walls, on the floor, or on the ceiling. Is there, for example, an acoustical tiled ceiling? Is the floor marble or polished concrete? Or is it carpeted?
For all of these different materials, there are tables of absorption coefficients. Let’s say you’re going to have a conference in Beijing at the Marriott. As long as I can get raw numbers of the dimensions of the room, what the walls, the floor, and the ceiling are made of, I can predict through a computer program [whether it meets] our proposed standard for a “Good Housekeeping Seal of Approval” for intelligibility.
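The prediction Leider describes can be sketched with Sabine’s classic formula — reverberation time RT60 = 0.161 × V / A, where V is the room’s volume in cubic meters and A is its total absorption in metric sabins (each surface’s area times its absorption coefficient). The coefficients and the room below are illustrative textbook-style values, not data from any real venue:

```python
# A minimal sketch of the prediction Leider describes, using Sabine's
# reverberation formula: RT60 = 0.161 * V / A (V in cubic meters,
# A = total absorption in metric sabins). The absorption coefficients
# are rough ~1 kHz textbook values, not measurements of any real room.

ABSORPTION = {            # fraction of incident sound absorbed
    "marble": 0.01,
    "carpet": 0.30,
    "drywall": 0.10,
    "acoustic_tile": 0.70,
}

def rt60(volume_m3, surfaces):
    """surfaces: list of (material, area_m2) pairs."""
    total_absorption = sum(ABSORPTION[m] * area for m, area in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 30 m x 20 m x 6 m ballroom: carpeted floor,
# drywall walls, acoustical-tile ceiling.
room = [("carpet", 600), ("drywall", 600), ("acoustic_tile", 600)]
print(round(rt60(30 * 20 * 6, room), 2))  # prints 0.88 (seconds)
```

Swapping the carpet for marble in the same room roughly doubles the predicted reverberation time, which is exactly the kind of material-driven difference the consultation screens for.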
How does the number of people in the room and the type of situational environment come into play? A museum, for example, needs to be more hushed than a space holding a social-networking event.
It’s definitely a function of the number of people in the room. Each human body absorbs [sound]. Absorption is measured in units of sabins — named after the first acoustician of record, Wallace Sabine, who was a physicist at Harvard in the early 1890s. The regents of Harvard came to him and said, “We have this museum, Fogg Hall. It sounds horrible. We cannot have a conversation in it.” They [wanted him to] fix it. He said there was no theory about how sound propagates indoors, and [they instructed him to] develop one. This was his lifelong research — developing the equations that characterize rooms and, essentially, speech intelligibility.
[Those equations have] lasted 115 years now, the science that he started. Each person typically exhibits between three and five sabins of absorption. The more people in the room, the more “dead” we call the room — the less reverberant it becomes — and reverberation is definitely the enemy of intelligibility. Having more people in a room helps as far as absorption goes. But if they’re all talking, then all bets are off, because they’re generating so much more noise than they’re absorbing.
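The arithmetic behind that point is simple: an audience adds its sabins directly to the room’s total absorption, shortening the reverberation time. Assuming four sabins per person (the midpoint of Leider’s three-to-five range) and a hypothetical 3,600-cubic-meter hall with 300 sabins of built-in absorption:

```python
# How an audience "deadens" a room: each person adds absorption, which
# shortens Sabine reverberation time (RT60 = 0.161 * V / A). The
# 4-sabins-per-person figure is the midpoint of Leider's stated range;
# the room numbers are hypothetical.

def rt60_with_audience(volume_m3, room_sabins, people, sabins_per_person=4.0):
    total_absorption = room_sabins + people * sabins_per_person
    return 0.161 * volume_m3 / total_absorption

print(round(rt60_with_audience(3600, 300, people=0), 2))    # empty room: 1.93 s
print(round(rt60_with_audience(3600, 300, people=200), 2))  # 200 people: 0.53 s
```

Two hundred bodies nearly quadruple the absorption here, which is why a packed room sounds so much drier than the same room empty — right up until everyone starts talking.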
Crowds of people tend to exhibit what is known in the psychoacoustic literature as the “cocktail-party effect.” The brain has really good neuroplasticity and adapts to different environments. Try this sometime at a restaurant: See how many different conversations you can hear at the same time. The brain can usually pick out three different conversations. But it becomes very mentally taxing after a while to discern just one. That process of discerning is called “sound-source segregation” or “stream segregation.” It actually makes you physically tired after a while, because you’re causing your brain to work overtime. It’s much better to have a very absorptive room [when there are] a lot of people trying to talk to each other.
These are all aspects that are dependent on the space or the building itself, but not really within a meeting organizer’s control. What options do meeting professionals have to improve sound quality at meetings?
That’s a good question. [We need to develop] a standard that’s agreed upon by an organization, such as the Audio Engineering Society [AES]. This could actually become an AES standard. For example, one of the standards that the AES came up with was the compact disc [CD] back in 1983. They standardized what a compact disc looks like, how many tracks you can fit on it, how long it goes, and all of these sorts of things.
For the rooms that don’t meet the standard that we would propose, there are different kinds of treatments that could be brought in. Those treatments can either be permanent or temporary, such as putting absorber panels up on the wall, and a few other tricks so the standard is met. Or, they would have to hire acousticians to come in and work with an architect to make a permanent fix, which is usually a good investment.
Sound can do one of three things: It either can be absorbed, or it can be reflected, or it can be diffused — scattered like a diamond scatters light. The hardest, the most expensive problem to treat is that of sound isolation, the leakage from one room to another. If you have ever been to a movie theater that is not THX-certified, it’s very likely that you’ll hear bleed-over from the adjacent theater right next to you. But in those [theaters] that have been certified, you generally won’t hear any bleed at all. The reason for that is purely due to the amount of mass separating those walls. Ideally, it would be a concrete wall separating the rooms, or a double-fitted wall. If that isn’t available, then the venue could elect to do a less-expensive and temporary fix, which would be to hang heavy velour curtains, or to actually build another wall — a wall between a wall — which can really help as well.
What about at convention centers where session spaces are separated by air walls?
I would borrow from the technology of the recording studio, [which] has dealt with these kinds of problems for the last 75 or so years. We use a gobo — basically portable walls that are on rollers, about four inches thick. They’re filled with an insulating material like Owens Corning insulation.
So not all temporary fixes are expensive?
No. It would definitely be more expensive if you wanted the venue to find a permanent solution to sound insulation, noise abatement, and increasing intelligibility. The other thing we haven’t talked about yet is that most venues I’ve seen for conventions do sound reinforcement. That’s the placement of loudspeakers for amplification of [the presenters]. They often place the loudspeakers in the absolute worst position. That is a very simple fix. I have computer software that [analyzes] a few measurements taken in a room. We can then show, very quickly, the acoustically best place to put the speakers to maximize coverage for the audience and to make sure everyone can hear what’s being said.
Can you give an example of bad loudspeaker placement?
It’s a very common thing [to place] a left and a right loudspeaker at the very front of a large room. A lot of rooms aren’t capable of supporting [stereo] playback. If the room is primarily meant for speech and not for music, it makes much more sense to have a single speaker front and center. The other thing that sometimes happens is that the AV personnel — and I’ve seen this happen at different convention centers — will place a podium microphone in the absolute worst spot. If you incorrectly place a podium microphone, or if you use the wrong kind of microphone, which also happens quite a bit, there is an effect called “comb filtering.” The microphone is receiving the direct signal from the human who is speaking. But it’s also picking up, a few microseconds later, a reflection — the first-order reflection of that speech bouncing off the podium itself and then hitting the microphone. Those two signals can sometimes cancel each other out. It actually diminishes certain syllables and certain consonants in amplitude. It’s like destroying the speech.
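The cancellation Leider describes falls out of basic phasor arithmetic: a tone summed with a delayed copy of itself has an amplitude that depends on the phase shift the delay introduces. The 0.5-millisecond reflection below — about 17 centimeters of extra path length, a made-up stand-in for a podium bounce — nulls one frequency while doubling another:

```python
# A minimal sketch of comb filtering: a direct signal summed with a
# delayed reflection cancels at frequencies where the delay equals half
# a period. The 0.5 ms delay is a hypothetical podium-bounce path.
import math

def peak_after_reflection(freq_hz, delay_s, reflection_gain=1.0):
    """Peak amplitude of sin(2*pi*f*t) + g*sin(2*pi*f*(t - delay))."""
    phase = 2 * math.pi * freq_hz * delay_s
    # Phasor sum of two equal-frequency sinusoids.
    return math.sqrt(1 + reflection_gain**2 + 2 * reflection_gain * math.cos(phase))

delay = 0.0005  # 0.5 ms, i.e. ~17 cm of extra travel at 343 m/s
print(round(peak_after_reflection(1000, delay), 3))  # 0.0  -- cancelled
print(round(peak_after_reflection(2000, delay), 3))  # 2.0  -- reinforced
```

The notches repeat at regular frequency intervals (the "teeth" of the comb), which is why the effect selectively guts certain consonants and syllables rather than just making everything quieter.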
What are your thoughts about open-air learning environments, where sound bleed is especially a problem?
That’s a great question. [Related to this environment] is the open-office plan. Probably the biggest practitioners of open-office plans are startups in the Bay Area — Google and Yahoo! It’s much less expensive. It’s much more social. You can fit more people than you can in a cubicle setup, or especially in a closed-office-with-a-door kind of setup.
But there is some research that indicates worker productivity in open-office plans can be diminished by as much as 20 percent — owing to acoustic leakage, lack of concentration, and the increased desire to socialize with the person next to you. It’s difficult to concentrate while hearing everyone else talking all of the time. The most effective thing that can be done is for workers in open-office plans to put on headphones or to put in earplugs. Earplugs that diminish the amplitude of sound by about 35 decibels are very inexpensive, maybe a dollar a pair.
Would you recommend that as a remedy for the open-air meeting environment, as well?
That would be the least expensive way to fix both of the problems. The other option [goes back to] gobos. And it would be much better to have [furniture with] material like fiberglass or wool; something that absorbs sound. It’s going to help a lot.
Is part of the problem that many adults — young and old alike — are experiencing hearing loss?
[Hearing loss] is definitely going to become an increasing problem. Just walking around in Miami’s restaurants and nightclubs, [I’ve heard how] federal noise guidelines are, on a daily basis, being trampled on. The Occupational Safety and Health Administration, OSHA, has a set of noise guidelines. If you just Google “OSHA noise exposure,” [you’ll find that] the guidelines dictate what is a safe decibel level for a human being to listen to and for how long.
For example, you can listen to 85 decibels, which is pretty loud. That’s about as loud as you would want your stereo at home to play. But if you listen to that for eight hours without interruption, it will cause damage. I was at a friend’s birthday party at a nightclub in South Beach recently. I measured a continuous decibel level of 105 decibels, which after about one or two hours was causing irreversible, permanent hearing damage in the 20- and 30-year-olds who were in there dancing.
I guess the point is if that’s going to become a fact of our population — that we are routinely damaging our hearing as a species — then I think it’s going to make the need for careful placement of microphones and loudspeakers for amplification of the human voice that much more important.
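Leider’s nightclub numbers track OSHA’s published rule of thumb: 90 dBA is permitted for 8 hours, and each 5-decibel increase halves the allowed exposure time (NIOSH recommends a stricter 85-dBA criterion with a 3-dB exchange rate, closer to the 85-decibel figure he cites). The rule reduces to a one-line calculation:

```python
# OSHA's permissible-noise-exposure table (29 CFR 1910.95) follows a
# simple rule: 90 dBA is allowed for 8 hours, and each 5 dB increase
# halves the allowed time (the "5-dB exchange rate").

def osha_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible exposure time in hours at a given A-weighted level."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

print(osha_hours(90))   # 8.0 hours
print(osha_hours(105))  # 1.0 hour -- the nightclub level Leider measured
```

At the 105-decibel level he measured, the permissible exposure runs out in a single hour, which is why a two-hour stay in that club sits well past the damage threshold.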
— Michelle Russell
Audio engineer Steve Bush on identifying sound leakage, controlling reverberation, and finding the right expert
Since its founding in 1979, Berkeley, California–based Meyer Sound has “been devoted to meeting the needs of sound-reinforcement professionals,” not only with products but through high-level technical education, according to the company’s website. Helping to realize that mission is Steve Bush, Meyer Sound’s senior technical support specialist and instructor in its audio education program. Company founder John Meyer, according to Bush, has made a name for himself by creating “unique and game-changing” products “that solve problems,” including pioneering a self-powered loudspeaker technology used at many live-concert venues today. The company has grown from producing loudspeakers to working on acoustic treatments and digital processing.
Convene asked Bush to draw on his 10 years of experience at Meyer Sound, as well as 20 years prior to that as a self-described “sound guy” who spent a lot of time in convention centers.
What can meeting planners do to improve intelligibility and sound in spaces divided by air walls?
It can be expensive, but if you know that [you] are in a specific convention center or even in a hotel where you’re dealing with the grand ballroom that divides into three, and you know that the air walls bleed, you can buy out the other sections. If you’re in Hall B, buy Hall A and Hall C as well, or try to schedule around competing events. Sound isolation is difficult in these types of rooms. If there are air leaks — and there are usually large gaps at the ends of the air walls and small ones between the panels — there will be sound leakage.
How can planners assess whether there will be sound issues in rooms they are considering for events?
There are some basic tests. Go into several rooms and simply say “hey” loudly, then clap your hands once; have somebody go to the other end of the room and do the same. Listen closely to what happens right after the initial sound. You can start to build a library of how rooms sound when you put speakers in them and it sounds good, and how rooms sound when you put speakers in them and it sounds bad.
There are two characteristics to listen for. There are these echoes — “hey, hey, hey,” like in a canyon — or there is reverberation, which is the continuation of a lot of little echoes so close together that you don’t actually hear them as individuals. That’s all reverberation is — a large number of very closely spaced echoes — but our brain doesn’t hear them as independent things. It just turns into a long decay, like the sound in a cathedral. Long echoes are really problematic for speech, hearing everything twice or more. Reverberation is problematic for intelligibility as well, muddling speech.
There’s an understanding of what happens in a room with reverberation and echoes, and when they become detrimental to speech intelligibility. As soon as we put loudspeakers in a room, [they] can really help if they’re directional enough not to create a lot of sound at the surfaces, at the walls and ceiling. So even in a problematic room, if a loudspeaker can just make noise for the audience — and, just as importantly, not make noise where the walls are — you would be less concerned with the echoes and reverberation.
There are some different approaches to pull this off. We can distribute loudspeakers through an audience area, either under seats or pendant-mount them from the ceiling like in a cathedral, and get the speakers closer to the listeners so they don’t have to be very loud, and they make less energy at the surfaces — less reverberation.
We have directional speakers that can be electronically steered. There are small-line arrays, there are point-source speakers — and as I say to my classes all the time, it’s really a matter of picking the right speaker, putting it in the right places, and pointing it in the right direction. Most of the time, speakers on stands are not ideal. Suspending them from a truss or the ceiling and pointing them down onto the audience reduces the amount of sound likely to arrive at a surface, which reduces reflections and echoes.
Changing the reflective surfaces can be beneficial. It increases voice intelligibility. Heavy theatrical drapery is an option for temporary use. It can be hung along a wall from a piece of truss. If you can cover up even half of the wall with a length of drape that’s twice the length of the portion of wall being covered, and stand it off the wall by about a foot, you can decrease the reverberation and echoes in a room.
Is the science of sound something most hotel AV companies are knowledgeable about, in your experience?
There’s a lot of variation in quality out there. Some sound folks are more interested in mixing — more the art side of sound than the science side of sound, so they’re less interested in where they put the speakers, what speakers they’re using, or how to get those speakers to work well together. Additionally, hotel AV companies don’t always have a good loudspeaker inventory to pull from that matches the needs of their rooms.
Sometimes it’s good to hire [an outside] consultant or a technical director who understands the technical principles behind these issues. I used to get hired to come in and do big [conferences]. As the master audio guy, I never stepped behind a console. My job was to make sure that the right systems were specified and implemented well and [that] the operating staff had all the information they needed.
How would a meeting organizer find such an audio expert?
Unfortunately, there’s no reliable database of qualified audio professionals. So by title, you may be looking for a system technician, most likely from the concert world. Most of these people are freelancers. Look for referrals or use LinkedIn and Facebook.
It’s a little bit of a challenge, because most venues present themselves as having sufficient in-house sound expertise.
An audio representative working for the event, not the “house,” should be involved early on in the project to do a site survey well before load-in. Most of the time, a good, experienced sound crew knows what the solutions are, but by the time load-in has started, it’s too late to implement anything that would substantially improve the outcome.
With the right equipment and the right kind of acoustic environment, these professionals can do magic. If the equipment is lacking or the acoustic space is not conducive, the sound technicians are dealing [with a losing proposition]. It’s really frustrating for everyone when half the room can’t understand what the keynote speaker is saying — more so for the people who are responsible for making it sound good.
— Michelle Russell
Cognitive scientist Josh McDermott on why people hear what they do — and why the auditory conferences he attends often have terrible acoustics
While the onus for successful sound at meetings often falls on planners’ shoulders, are there ways for attendees to individually filter out noises they don’t want to hear? We asked Josh McDermott, Ph.D., an assistant professor in MIT’s Department of Brain and Cognitive Sciences who oversees the school’s Laboratory for Computational Audition. McDermott’s research focuses on how we learn to instantly recognize certain sounds — and why machines can’t — as well as how we segregate sounds and perceive reverberation.
Can you explain reverberation, and some of your studies pertaining to it?
The sound that enters our ears originates from an outside source, but that source can interact with the environment on the way to our ears and arrive as a very distorted version of the original. We’re interested in how it is that people are able to partially separate the effects of the source and the effects of the environment, and we study that by combining computational models of what we think the auditory system is doing with experiments on people to test how people hear. We also spend a lot of time looking into the brains of people while they’re listening to sounds, to try to understand the neural representations that are produced by the sound.
Are there techniques or technologies that listeners can use on an individual basis to segregate the sounds that they want to hear from those that they don’t?
That’s a subject that I’m deeply interested in, but those technologies mostly don’t exist yet. I’d like to think that in another 10 or 20 years, we’ll have applications on our cellphones that will essentially process sound for us and help us filter out the stuff that we really don’t want to listen to. As of now, that problem is largely unsolved.
One thing that does exist is directional microphones, which will filter out sounds coming from directions that you’re not interested in. In principle, they’re something that you can equip a person with and that might help; [a listener] kind of points their head toward the source that they’re interested in, and that will be amplified relative to everything else. But it’s not a product that’s actually available to us yet.
I’ve also often thought that it might be kind of useful to create a kind of ear prism, little tubes going into your ears that would allow you to point an artificial ear toward the thing that you’re interested in while enabling you to continue to look at [the subject]. I think that would be a cool thing to actually try.
Since you’ve attended and presented at meetings devoted to sound and hearing, are there any techniques you’ve seen used at these meetings to cut reverberation and control sound?
I’m always amazed at how bad sound is at auditory meetings. You’d think that it wouldn’t be, but meetings are organized by professional meeting organizers, so the fact that [we’re attending] an auditory meeting is irrelevant. It ends up being random whether the acoustics are good or not.
I’m amazed at how much reverberation varies from space to space. I’ve given talks in concert halls where the reverberation makes it almost impossible to even present sound demos — you can’t even play them because the reverberation so profoundly alters the sound. In other comparably sized rooms, it will work great. So I think the way that the walls are treated is probably the biggest factor, and that’s usually beyond the control of the people that are coming to the meeting. The actual scientists that go to the meetings, they just don’t have a whole lot of direct control over all of these things.
I think things could be a lot better. The people who are organizing [meetings] are not the people attending them — so they end up not being invested in a detailed way in how things go. It’s possible to manage the reverb, but it’s just usually not the case. There are also some interesting signal-processing tricks that people have looked into for counteracting the effects of reverb, but they’re not widely used at this point.
What kind of tricks?
Reverb has the effect of blurring sound out, giving you these delayed copies of the sound, all these sorts of reflections that arise at later times. [Reverb] causes the sound to lose its resolution in some sense. So the trick would be that if you can break the sound up into little pieces so that the individual pieces are “shorter,” then the blurring causes the pieces to interfere with each other less than normally. So it would be possible to take a speech signal and reproduce it in a way that makes the thing that actually enters the person’s ear a little bit closer to the actual, original sound. This is not something that’s in widespread use, but within 10 years it could be.
With ever-more staticky environments around us, do you think people are evolving in terms of how they perceive and process sound?
That’s something I’m really interested in, and I think we don’t really know the answer to that. It’s a deep and unresolved issue in neuroscience as to what extent our abilities are due to what we’re born with and to what extent they’re learned and adapted to the particular environments in which we live. Certainly, what’s very clear is that modern industrialized life is much noisier than life was pre-machinery. Once, you walked around in the forest and things were pretty quiet. Now you walk around the city, and the decibel levels are much higher. I think it’s an open question as to whether people have adapted to that, or whether our struggles — such as when we’re in a restaurant or a nightclub — are due to the fact that our auditory system evolved during situations when the world was a bit quieter.
I’ve worked in a few noisy newsrooms, and there is often so much noise that it’s hard to concentrate. Over time, though, some journalists can drown out background noise so they can focus on writing. I don’t know what mechanism makes people able to do that.
There are probably two effects that work in opposition. It’s plausible and likely that people can learn to get better at most things they practice. But there’s this other factor, which is that when you work in a noisy environment, you can end up suffering from hearing impairment. For instance, at a construction site, most of the workers probably end up with hearing loss. It would be interesting to look at environments where the overall dB [decibel] level is not that high, like a newsroom, where there are just lots of sound sources, so filtering ends up being important.
Do you have any other thoughts on manipulating sound at meetings or in noisy environments?
I think that within the next 10 to 20 years, there will be things along the lines of personalized hearing aids — tools that will be in smartphones, where somebody will be able to basically say, “Yeah, I only want to hear the person who’s talking,” and that will come through their earphones. Things will really explode in terms of personal assistance from devices.
— Corin Hirsch
C2MTL stages its boldly designed program in large, open-air spaces — and through trial and error has learned how to calibrate its sound.
From its beginning four years ago, Montreal’s C2MTL conference — the name combines “commerce” and “creativity” — has intended to be more than just a little bit different. A partnership between the creative-services firm Sid Lee and Cirque du Soleil — both based in Montreal — the event not only brings in some of the world’s most innovative thinkers and business leaders, but also takes place in imaginative, immersive environments. Organizers emphasize sensory experience and experimentation — last year, attendees climbed into a pool filled with small plastic balls — as well as connecting participants to one another to generate ideas.
But even the boldest meeting planners encounter boundaries that can be pushed only so far — such as the physical limits of sound. That was a lesson that C2MTL learned the hard way.
‘A LOT OF OPEN SPACES’
At the first conference, held in 2011, organizers attempted to stage four workshops in one “large space with really long tables,” said Nadia Lakhdari, C2MTL’s vice president of content and creation. “It ended up being a disaster, because no one could hear the instructions that the workshop leaders were giving.” In an attempt to be heard, workshop leaders climbed onto the tops of tables and turned up the volume. “They only managed to drown each other out — making the problem worse rather than helping it.”
As a result, C2MTL rethought how it used space. “One of the big learning curves we’ve had with noise is that not every activity can take place anywhere on the site,” Lakhdari said. “It’s something we give a lot of thought to, and we make sure that our expectations for what is going to take place match with the noise level that is predicted to be in a certain space.
“Because we work with a lot of open spaces, we pay a lot of attention to our often-competing noises and how these things are organized during the day so that they don’t overlap each other,” she said. “And if there is a need for one item to really be noisy, then we will suspend activity in neighboring areas for that time so that no one is annoyed by the neighboring noise, and is stopped from doing what they thought they were there to do.”
In the case of workshops, they’re structured so that at no point does a leader have to address a group of, say, 20 people at once in a shared space. “Maybe instead, the leader will go from group to group and talk to people one-on-one,” Lakhdari said. “There are ways around this, but don’t try and pretend that you can hold a group activity in a noisy place, because it just won’t happen. A group activity led by one person and where you expect all the group to hear — it just won’t work. But other deconstructed activities or networking activities will work very well in a noisy place.”
Adding to the challenge is the fact that for the last three years, C2MTL has taken place at the Arsenal, a huge, redbrick building that originally was part of a 19th-century shipyard. “It’s really an acoustics nightmare,” Lakhdari said. “In its raw state, if it’s empty, it’s very echoey. And so we work a lot with fabrics and different textures and different structures to try and create areas that will absorb noise a little bit.”
But those efforts always have to be counterbalanced with fire regulations. “There is a limit,” Lakhdari said. “If all you were focused on were acoustics, you’d drape a lot of thick fabric all over the place to create little bubbles. But that can contravene the fire code for different buildings. And so it’s not only budget that will limit you in making a place more acoustically friendly.”
IN THE MIX
C2MTL begins with a very good sound system and a very qualified person both to operate it and to design its placement. “Just something as simple as in which direction do you orient a speaker, for example, will have a huge impact on sound in surrounding areas,” Lakhdari said. “So that design needs to have been done extremely well, and that tends to be expensive, because you need to hire good equipment and hire the right people to design the use.”
The mix of high-tech equipment and low-tech sound absorbers, such as fabric, walls, and paper, is a collaboration between C2MTL’s sound and set designers, who consider both utility and aesthetics, Lakhdari said. And the organizers themselves also try to strike a balance on sound. “If you’re somewhere that is just too noisy and you can’t hear each other speak, then it won’t be an enjoyable experience — especially at a conference where networking is such a fundamental part of what you’re there to do,” Lakhdari said. “But at the same time, if you walked into a space [so] quiet that you felt almost embarrassed to make noise, that wouldn’t help your experience either. So I think it’s that happy medium. And perhaps in the evening, it’s a little bit louder, because people are used to a little bit more music and sound.”
— Barbara Palmer
Scene and Heard
Three leading AV providers offer best practices for fine-tuning the sound mix at your meetings and events.
1. Arrange speakers carefully
No, not your keynoters and other presenters — your audio speakers. “The placement of the speakers in the environment is very important,” said Jim Russell, executive vice president of sales for Freeman. “Sometimes the design elements of the room can take precedence over the sound, but it all needs to be done in conjunction for the best possible outcome.”
2. Read the room
“Don’t think of audio as an easy piece of your set design — it has to be considered upfront, ideally at the time of site inspection,” Russell said. “Larger sessions, especially, can be a challenge.” There’s a pervasive myth that sound engineers can fix anything given the right equipment, and that’s true sometimes — but not all the time. “If there’s an echo in the room,” Russell said, “there will be an echo during a session.”
3. Remember, it’s called audiovisual
Ensure well ahead of the event that both audio and visual elements are in sync, to maximize your audience’s retention of meeting content. “If a person hears a message and looks at a related graphic,” said Mark Consiglio, product manager for audio and IT at PSAV, “he or she absorbs an additional 91 percent of the message.”
Michael Bogden, a sound designer and mixer at Visual Horizon Communications, agrees that coordinating audio and visual ahead of time is mission-critical. “Half of the content that the audience is getting in the meeting is what they’re seeing on the screen, whether it’s video or a PowerPoint presentation,” Bogden said. “If they can’t experience what’s on the screen, then the purpose of having that meeting is lost.”
4. Talk to your creative partners
Collaboration among your sound engineers, set designers, and the team in charge of visuals is also critical for the best audience experience. “Even though it might be the optimal speaker position, I can’t hang a speaker array in front of a screen, and if there’s an interesting scenic piece that plays a big part in the show, I can’t hang one in front of that either,” Bogden said. “What I like to do, from the get-go, is sit down with everyone involved to find ways that I can design my speaker system into a piece of scenery, or to hang between two screens. The key is early and constant collaboration with everyone involved.”
5. Get the early-bird special
Budget is a huge driving force in sound design because it determines the size of your sound system and what equipment will be used on site. When you bring your sound team into the fold, they can offer suggestions and advice about making the most out of budgets of any size. Consiglio also recommends setting aside money for recording and livestreaming. “Don’t skimp on the labor or equipment recommended by your audio specialist,” he said. “We have spent a lot of time and energy determining the best way to ensure a successful event recording.”
6. Use the buddy system
On a related note, think of your sound provider as a partner, not a supplier. “We can help provide suggestions that would make a scenario that much better, but we can’t do that if we’re not at the table,” Bogden said. “Anybody can supply gear, but not everyone can be a partner. I think it’s important to be a really good partner.”
7. Understand the importance of silent sound
Bogden is frustrated when conference organizers treat audio as less of a priority. “I think there’s an attitude that ‘It’s just sound, it can just be okay,’” Bogden said. “The reality is that when sound is good, it goes unnoticed, and when it’s not, people notice right away.”
8. Don’t automatically go wireless
Consiglio cautions against putting all your faith in wireless microphones for speakers — your sound provider may have a better suggestion for your particular meeting space or room setup. “One of the biggest misconceptions when dealing with sound design in the meeting space is that wireless microphones are 100-percent infallible, which is not the case at all,” Consiglio said. “Wireless microphones have a lot of variables, like interference and battery life, that can bring a meeting to its knees. In some cases, a wired microphone may be recommended, and that could be because the representative knows from experience that wireless mics could be an issue in that space.”
9. Consider what not to wear
It may seem like a minor detail, but presenters’ clothing can be a roadblock when it comes to wireless microphones, according to Bogden. Sweaters, dresses, and even earrings can make it difficult for the sound team to place microphones for optimal sound quality. Ask for advice early on, so you can brief your speakers well in advance of the meeting or request special equipment, such as necklace microphones. According to Consiglio, such specialty devices are perfect for speakers without a lapel or button-down shirt to connect a traditional lavaliere to. “They can be tucked under a scarf, a piece of jewelry, or even a collar on a dress shirt,” Consiglio said.
10. Go small or go home
“Everyone wants to try something new,” Bogden said. “They always want to experiment with something, whether it’s a new technology, a new piece of gear, a new application for a certain type of speaker, or a new design idea. The best piece of advice I got from a college professor was ‘Do that in very small steps.’ If you’re at a meeting and you want to try something new, just try one thing. Don’t do too many things at once, because you’ll increase the opportunity for failure. It would be a bad idea to try new microphones on all the presenters. If it doesn’t work, and the sound isn’t right, that’s a big deal.”
— Kate Mulcrone