When i’m not being a sysadmin at work, i’m two thirds of our Stream Team, an
informal group of two which is sometimes called upon when we have events of
moderate complexity that need streaming. Another colleague of mine is the wizard
of sound, when sound wizardry is required.
Doing live multicamera video production is challenging, a bit unnerving, and
fun. There are a whole bunch of things to think of.
First, a colleague pings me and asks whether i’m the one doing these streaming
things and whether i’d be available on this and that date (which used to be
“all too soon”, but lately i’ve been given a much more generous heads-up). I
rent some cameras and lights and make sure they actually work. I communicate
with the “client” so it’s clear to everyone involved what kind of an event it
is, who’s going to show up, whether anybody’s going to join by remote
connection, whether the audience is local, remote or both, whether there are
slides or other presentations, and whether those presentations include sound. I
also ask that, if at all possible, the slides use a font size that’s big enough
to read on a small screen, and that the right third of each slide be left free
so i can mix the talking head onto the picture.
Then it’s setting up the “studio”, which is a meeting space on our top floor.
This involves moving seats and tables for the presenters – depending on what
kind of an event it is – and checking that it all looks good on camera.
Sometimes, i put some plants in the background, for example. I set up lights
(two in the front, two from the back) high up and cameras at eye level and
adjust everything so that the chairs and the people on them will look more or
less the same size. And then i place the microphones.
I have an ATEM Mini Extreme ISO for video mixing and a Behringer Flow 8 for
sound mixing. Last Friday, i actually got all the video inputs on the ATEM
plugged, which is a first for me. This was also my most complex production
yet, and it went … alright, though not without hitches. Inputs 1, 2 and 3 were
for the cameras: two Sony 6300-series cameras and one Blackmagic Pocket Cinema
Camera 4K. Input 4 was for the remote presenters joining by Zoom. Input 5 was
for the local presenters’ slides and Input 6 for the remote presenters’ slides. With
Zoom, you can get Dual Monitor mode: the remote presenter in one full screen,
their slides in another full screen, and your controls on your laptop screen.
Very convenient. Input 7 was for the Google Meet screen the event was streamed
to, in case somebody on the Meet wanted to ask the presenters something live,
but in the end this became too messy and we took questions by text instead.
Finally, Input 8 was for H2R Graphics, which is a
software package for overlay texts. You can get it for the low low price of 0 €
(and your soul), or you can choose to pay $80 for a pro license. Which i very
well might do one day, because John Barker is an excellent feller who makes
great instructional videos. But
that’s another story. H2R Graphics can also output to separate fill and key
screens so that you can get gradual transparency on your overlay graphics. But
as i mentioned, all my inputs were already taken.
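For my own sanity, i keep a little cheat sheet of the input mapping. Sketched here as a Python dict — the labels are just my own shorthand for this particular production, not anything the ATEM itself knows about:

```python
# Input map for the ATEM Mini Extreme ISO on this production.
# The labels are my own notes; the hardware only knows input numbers.
ATEM_INPUTS = {
    1: "Camera 1 (Sony 6300 series)",
    2: "Camera 2 (Sony 6300 series)",
    3: "Camera 3 (Blackmagic Pocket Cinema Camera 4K)",
    4: "Zoom: remote presenters (Dual Monitor screen 1)",
    5: "Local presenters' slides",
    6: "Zoom: remote presenters' slides (Dual Monitor screen 2)",
    7: "Google Meet return (audience questions)",
    8: "H2R Graphics overlays",
}

# All eight inputs taken -- no spare pair left for fill & key.
assert len(ATEM_INPUTS) == 8
```

With every input occupied, there is simply no room left for the separate fill and key feeds, which is why the overlays went in as a plain picture on Input 8.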
The event was streamed to Google Meet and YouTube, and recorded separately on
SSD in case the connection went bonkers. The ATEM Mini can output the mix over
USB-C, presenting itself as a webcam to the “Meet computer”, and it has
YouTube streaming and a USB port for a recording disk built right in. I wish i had the
skillz and capability to also record the sound in multitrack, but i don’t think
i have the gear or brains for that.
Audio tends to be easier. Plug in your microphones to the Flow 8. It can handle
up to four microphones, of which up to two can be condenser mics. If we have a
quiet environment, i prefer the condenser mics because they are smaller and i
can have them a bit farther from the speaker’s face. But this time, we also had
the Zoom connection, which came in over USB, and a talk-back channel from Meet
back into the mixer. Furthermore, i had a set of speakers in the studio so that
the local presenters could hear the remote presenters and the audience. This
turned out less successful than i had wished.
It is, technically, possible to create several independent mixes on the Flow 8.
One mix, presumably your “Main” mix, goes into the video mixer and from there on
to Meet, Zoom, YouTube and SSD. Another mix is a “Monitor” mix, which should
contain everything except the microphone signal — to prevent feedback in the
studio. And in an ideal world, you should not be sending the Zoom sound back to
Zoom, nor the Meet sound back to Meet. Thankfully, echo cancellation is so good
these days that you can let the software take care of it.
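The routing rule behind all this is easy to state in code, even if it’s fiddly on the hardware. A minimal sketch of the mix-minus idea — the source names and the helper function are mine, invented for illustration, not anything Behringer ships:

```python
# Mix-minus: every destination gets everything EXCEPT its own signal,
# so nothing it produced comes back to it (no feedback, no echo).
# Source names are my own labels for this particular production.
SOURCES = {"mics", "zoom", "meet_talkback"}

def mix_for(destination_sources: set[str]) -> set[str]:
    """Build a feed that excludes the destination's own sources."""
    return SOURCES - destination_sources

main_mix = mix_for(set())         # everything: to the stream and the SSD
monitor_mix = mix_for({"mics"})   # studio speakers: no local mics
zoom_return = mix_for({"zoom"})   # Zoom shouldn't hear itself
meet_return = mix_for({"meet_talkback"})
```

In practice the Flow 8’s monitor buses are where you’d build these per-destination feeds, and — as noted above — the software echo cancellation in Zoom and Meet covers for you when the hardware routing falls short.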
In the end, sound was the biggest problem. I could not get the studio monitor
mix to work as it should. There was feedback. The televisions in the studio
would also emit the presenters’ sound, which made for a terrible slap-back
echo that is horrible to be in. I solved it by turning off the telly behind
the presenters, because the only other way to control its sound was the remote
control, and that had flat batteries. Eventually, i plugged
the headphone output of the sound mixer into the audio line input of the video
mixer, because that was the only way i could get reliable sound with the cables
i had (one cable too few, as it always seems to be). Luck comes to the prepared,
and i just wasn’t prepared enough.
Thankfully, everything didn’t go ape. The presentation was quite a success:
both the audience and the presenters were happy (or at least not visibly
annoyed), the picture and sound were more than passable, the transitions were
smooth enough, and we only had one severe network glitch, which might not even
be visible on the YouTube feed, since the ATEM should have enough caching to
handle that sort of hiccup.
I have another presentation of equal complexity Tuesday. Let’s see if i can get
things running just a bit smoother then :)