[00:00:00] Nathan Wrigley: Welcome to the Jukebox podcast from WP Tavern. My name is Nathan Wrigley.
Jukebox is a podcast which is dedicated to all things WordPress. The people, the events, the plugins, the blocks, the themes, and in this case, making the web a better place for those who are deaf.
If you’d like to subscribe to the podcast, you can do that by searching for WP Tavern in your podcast player of choice, or by going to wptavern.com/feed/podcast. And you can copy that URL into most podcast players.
If you have a topic that you’d like us to feature on the podcast, I’m keen to hear from you and hopefully get you, or your idea, featured on the show. Head to wptavern.com/contact/jukebox, and use the form there.
So on the podcast today, we have Elena Panciera and Chiara Pennetta.
Elena is a freelance consultant specializing in inclusive and accessible languages. She champions linguistic inclusivity, advocating for simplifying language to aid understanding for non-native speakers across all languages. She believes in providing tools to make text more comprehensible for everyone.
Chiara has been a special needs educator for the past two years. Deaf since infancy, Chiara underwent cochlear implant surgery four years ago, significantly improving her hearing. This transformative experience deepened, and altered, her connection to her deaf identity, and spurred her to explore issues of deafness and accessibility.
They’re joining us to discuss the important topic of accessibility online. Accessibility is often an overlooked aspect of web development and event planning, and there are significant challenges and opportunities in making content accessible to diverse audiences.
Elena and Chiara walk us through their personal journeys and professional insights, shedding light on the varying needs within the deaf community.
Chiara shares her experiences of navigating a world that is increasingly leaning towards video content. And how platforms like TikTok, Instagram, and YouTube have improved accessibility through automatic captioning.
Elena highlights the importance of these features, not only for deaf individuals, but also for a broader audience that benefits from reading captions.
We talk about the implications of the European Accessibility Act, set to revolutionize accessibility requirements for websites. What will this mean for developers and content creators? Are captions enough, or should we aim for sign language interpretations as well, despite their complexity and cost?
We also talk about the essential principles of simplification in transcription, and how AI tools are shaping the way we deliver accessible content. Chiara emphasizes the diverse needs of individuals with hearing impairments, and why aligning transcriptions with the original content is crucial.
Towards the end, we explore technologies which are improving communication for the deaf community, and the practical steps WordPress events can take to be more inclusive.
Whether you’re a web developer, event planner, or just passionate about accessibility, this episode is for you.
If you’re interested in finding out more, you can find all of the links in the show notes by heading over to wptavern.com/podcast, where you’ll find all the other episodes as well.
And so without further delay, I bring you Elena Panciera and Chiara Pennetta.
I am joined on the podcast today by Elena Panciera and Chiara Pennetta. Very nice to have you both with me.
[00:04:10] Elena Panciera: Thank you.
[00:04:11] Nathan Wrigley: We’re at WordCamp Europe. It is Saturday, so it’s the final day of the conference, and I’m talking to these two fine people about a topic that they spoke about, yesterday I think. So we’re going to get into that topic in a moment. And it’s a topic that, I confess, I don’t know very much about. So hopefully they’re going to tell me, and educate me a lot about it.
But before we do that, one at a time, could I ask you just to introduce yourselves. Tell us a little bit about who you are, where you come from, where you live, what you do for a living, that kind of thing. So let’s go.
[00:04:42] Elena Panciera: Hi, I’m Elena Panciera, and I’m a freelance consultant. I’m an expert in inclusive and accessible languages. And, well, I work as a freelancer, so I’m a trainer, and I do consultancy about accessibility, and about how to be kind and respectful with language. This is what I do.
[00:05:05] Nathan Wrigley: Thank you.
[00:05:06] Chiara Pennetta: Hello, my name is Chiara Pennetta. I am 30 years old, from Italy. I have worked as a teacher in special needs education for the past two years. I studied Ancient Greek and Latin, and then I specialised in teaching Italian as a second language. But four years ago, I got two cochlear implants, because I have been deaf since I was one and a half years old. Five years ago, in fact, I chose to try and improve my hearing with this surgery, which is called a cochlear implant.
In that moment I became more hearing from a medical point of view, but I also became more deaf from an identity point of view. And so I chose to delve more into the topic of deafness and accessibility. And that was when I opened my Instagram page, which is called the undeaf, because I feel neither hearing nor deaf.
And that was when I met Elena, and we started studying and working together to make the world a more accessible place. And that was also when I decided to be a teacher for people with disabilities in high school here in Italy.
[00:06:26] Nathan Wrigley: Thank you very much. Can I just ask you, you said that your medical deafness had improved, but your, I think you said identity, your identity deafness or something. What did you mean by that?
[00:06:42] Chiara Pennetta: I think that deafness is considered a disability by most people. So, the lack of hearing. But deafness is also a community, a culture, and a language. Because if you start studying sign language, like I did four years ago, you will find out that there is a whole new world to discover about deaf identity.
For me, that was life changing. It’s a paradox, because I found out about my deaf identity only when I improved my hearing. It’s a paradox, I know. But my cochlear implants are more visible than the hearing aids I had before. And so my deafness became more visible, but also more invisible, because my speech improved. My access to the world of sounds improved. Also my identity, like I said before.
[00:07:42] Nathan Wrigley: Yeah, that’s really interesting. Okay, thank you for clearing that up. I appreciate it.
So during the course of this interview, I will be asking questions, and it doesn’t really matter who wants to take it, but thank you for the introductions.
So let’s begin then. So the first question that I’ve got really is, well, I should probably introduce what you spoke about at WordCamp Europe. Your topic was called Digital and Linguistic Accessibility Techniques and Strategies for Deaf People. So we’re going to try and explore that a little bit.
And my first question then is, what kind of experiences on the web are different if you are deaf? So I am in the fortunate position of having good eyesight. I wear glasses, but I have good eyesight. You know, my hands and my legs all work. I can hold a mouse, I can type on a keyboard, and my hearing is good. So I genuinely don’t know what it would be like to browse the internet, to move around online, if I couldn’t hear. So could you just describe what the difference would be, for me, if I were deaf?
[00:08:44] Chiara Pennetta: Sure. I think that the biggest difference is with all the video, and sound, and musical content, of course. So anything that you experience with your hearing is different if you don’t have hearing at all, or are partially deaf. Any video, any podcast of course, any web content which is accessed by an audio feed, needs to be supported with captions or a transcription, or sound signals. I don’t know, maybe it is not web related, but in movies, when you have captions and someone knocks at the door, the captions signal that there is someone knocking. So sound signals and transcriptions are really important. I think that that’s the most important thing to say.
But yesterday, Elena and I really focused on the fact that, on many occasions, captions and transcriptions are not enough. Because many people who are born deaf, and maybe don’t use hearing aids, sometimes are not proficient in understanding an oral, spoken language, even if it is in a written form.
So they can struggle to understand complex, and longer, sentences with complex syntax and vocabulary. Elena, who is an expert in language accessibility, really gave tools to write in a way that is accessible to people who don’t have a high level of proficiency in understanding a written text.
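To make Chiara’s knocking example concrete for site builders, here is an illustrative sketch of what a WebVTT caption file with a non-speech sound cue can look like. The timings and wording are invented for the example.

```ts
// Illustrative only: the text of a WebVTT caption file that includes a
// non-speech sound cue, like the knocking example above. The timings
// and wording are invented.
const episodeCaptions = `WEBVTT

00:00:12.000 --> 00:00:14.500
[someone knocks at the door]

00:00:15.000 --> 00:00:18.000
Come in! The door is open.
`;

// In a real project this text would be saved as a .vtt file and
// referenced from a <track> element on the video.
console.log(episodeCaptions);
```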
[00:10:22] Nathan Wrigley: Right. So the assumption then is that if I, for example, produce a video, and I’m very, the word I’m going to use is verbose, meaning I use too much language. I’m talking, but my sentences are complicated and what have you. Are you suggesting then that you need to bear in mind making it more straightforward? Because if I’m reading it, it might also be the case that my level of cognition, my understanding of what I’m reading, may also impact my ability to understand that content. So, whoever wants to take that.
[00:10:54] Elena Panciera: Yes, actually, we have to keep in mind that not everybody has the same level of proficiency in any language, and especially in Italian. Because in English it’s quite easy to have shorter sentences, and since it’s very commonly used amongst a lot of people that have other languages as first languages, English is simpler. But in Italian, for example, we commonly use very long sentences, with a lot of clauses that are not coordinate clauses, and subjunctives, and a lot of difficult words.
Oral Italian is simpler, but in written Italian we tend to use more difficult words, and more difficult structures. And so it’s important to keep it simple, and to try to remember that not everybody has Italian as a first language. And this happens with every other language too. So for English, but also for German, for Spanish, and so on. And so we have tools that can help us make a text simpler.
We also have artificial intelligence that can help us. There’s ChatGPT, and Gemini, for example, but also tools that are integrated into WordPress. For example, Yoast or Semrush have tools that help simplify text, and choose simpler, more common words, for example.
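For developers who want to experiment with what Elena describes, here is a minimal sketch of asking a language model to simplify a transcript. It assumes OpenAI’s chat completions endpoint; the model name and prompt wording are placeholders, and the same idea would work with Gemini or other APIs.

```ts
// A minimal sketch of text simplification with a large language model.
// The endpoint shape matches OpenAI's chat completions API; the model
// name and the system prompt are assumptions, not a fixed recipe.
async function simplifyText(text: string, apiKey: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [
        {
          role: "system",
          content:
            "Rewrite the user's text with short sentences and common words, " +
            "keeping the original meaning. Target readers with limited " +
            "proficiency in the written language.",
        },
        { role: "user", content: text },
      ],
    }),
  });
  // A real tool would also check response.ok and handle errors.
  const data = await response.json();
  return data.choices[0].message.content;
}
```

As Elena notes just below, the output of any such tool still needs a human check before it is published.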
[00:12:43] Nathan Wrigley: Can I ask you a question about this podcast? So when I have finished editing this, I will make a transcription, and I will try to make it perfect, exactly what you say. However, because your native language is Italian, I sometimes take the position that it’s better for me to translate it into the English that I would like you to have said, if you were an English speaker. What do you think about that? I’m sometimes faced with this difficult thing: okay, I think they meant to say it this way, so I’ll write it that way, whereas you didn’t actually say it that way. What should I do?
[00:13:25] Elena Panciera: Actually, this is a very interesting question, because I think there are also different positions on that. Sometimes I’ve been told that it’s better to have transcriptions that are perfect, so they transcribe exactly what the speaker said. But actually, in terms of accessibility, maybe it could be more interesting to even simplify the language a little bit. So make the sentences shorter and, I don’t know, adapt the transcription a little bit.
[00:14:02] Chiara Pennetta: I would like to add something on this topic. I don’t know if you ever noticed that in many movies, when you buy the DVD, or on Netflix too I think, you can choose between normal, basic captions, and the captions that are called subtitles for the hearing impaired.
[00:14:22] Nathan Wrigley: I have not come across that. So this is new. It’s interesting, keep going, yeah.
[00:14:25] Chiara Pennetta: Sometimes you have two kinds of transcription for a movie or a TV series, because basic captions transcribe literally everything that the actors say, while captions for deaf people are often simplified in vocabulary and grammar. Because maybe the language proficiency is a bit different, and they need a language structure which is more simple and accessible.
[00:14:53] Nathan Wrigley: And also, I would imagine, keeping up. Being able to read at the speed that somebody is saying something. Let’s imagine a movie on Netflix: if an actor is speaking fairly quickly, and you are transcribing every single word, it’s hard to read at the speed of speech. So we just take out unnecessary words and sum up the idea behind it. That’s really interesting, because you have to have an opinion about what they intended to say, and how much you can simplify it, because you might lose some of the context.
But you can do both. You put the exact transcript, but you have the secondary choice, I’m going to use air quotes, the simpler version, fewer words, easier to understand, less complicated words, shorter words. Ah, that’s really interesting. I had no idea that existed. Do you want to add something to that?
[00:15:46] Elena Panciera: This is the principle of simplification and facilitation. So there are these two different concepts, simplification and facilitation.
Simplification means that we adapt and change the original text. And with facilitation, we can add tools to help the reader understand the text, but the original text remains the same. You can add glossaries. You can add another version, a simplified version, or a shorter version. Because we could also have difficulties keeping up our attention for a long time, and reading a very long text could be difficult. So yeah, we have different tools to improve accessibility.
And actually, accessibility is multimedia, going across different tools and different media. For example, the transcription is different from the audio, and we can also add graphics, illustrations, or visual tools to improve the understanding of a text.
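As a rough illustration of facilitation for site builders, the sketch below adds plain-language tooltips to a page without changing the original text, one of the approaches Elena mentions. The glossary terms and definitions are invented for the example.

```ts
// A small sketch of "facilitation": the original text stays the same,
// and a tooltip with a plain-language definition is added to difficult
// words. The entries are invented examples; a production version would
// walk text nodes rather than rewriting innerHTML.
const glossary: Record<string, string> = {
  subjunctive: "a verb form used for wishes, doubts, and possibilities",
  transcription: "a written version of something that was spoken",
};

function addGlossaryTooltips(root: HTMLElement): void {
  for (const [term, definition] of Object.entries(glossary)) {
    root.innerHTML = root.innerHTML.replace(
      new RegExp(`\\b${term}\\b`, "gi"),
      (match) =>
        `<span class="glossary-term" title="${definition}">${match}</span>`
    );
  }
}

addGlossaryTooltips(document.body);
```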
[00:17:03] Nathan Wrigley: What do these tools do? So I’m imagining that I’ve transcribed this, and I’ve got my perfect, in air quotes, transcription. I’m really happy with it. Do I then just copy and paste what I’ve got, put it into a tool, click a button, and it will go through it with artificial intelligence, for example, and just shorten everything, make it more brief, as we’ve just described? What do these tools do? And could you name a few? Because I’ve never heard of any of them, so I’d like to go out and find what some of them are.
[00:17:33] Elena Panciera: Actually, yes, there are some tools like ChatGPT, and Gemini, or specific tools like Capito, that can help you do this in an automatic way. But actually, it’s always better, in my opinion, to check what the artificial intelligence does.
Human intelligence is still better than the artificial one. And actually, you can use your critical sense, I don’t know if I can say this in English? Yes. Because you can really think about what is important, and maybe you can also ask. In your case, you could ask us, what did you mean with this sentence? Because it was not in perfect English, so we can tell you and make it right. I don’t know how to say it, but let the mistakes be only in the oral podcast, and make a better version in the written text.
[00:18:34] Nathan Wrigley: Right, so a good example of that that’s coming into my mind might be if both of us said, I don’t know, the word like a lot. It’s commonly used in English, we say like, like, like, like, and also um, um, you know, those kinds of things. Would you want that to be kept in a transcription, or even the audio? Because when I edit this podcast, I tend to get rid of the bits, well, for example, of silence. There might be five seconds of silence where you think of your answer in this podcast episode. I take the position, let’s get rid of that silence. Let’s make it just so that one word follows the next word.
And also I take out the ums, and the ahs, and the likes and things. I’m really making a completely new version of what you said. Is that a good idea? Would you prefer to listen to exactly what we record today? Or is it okay to edit things out? Is there a better way of doing it?
[00:19:28] Chiara Pennetta: I think the oral version of a thing and the written version, or the painted version, or the danced version of something are, of course, different things, because they’re different products. So we can’t expect them to be the same, it wouldn’t be right. They are two different things, so it’s normal. They’re not the same thing, because you perceive those two things with two different senses. And so I think it is right to delete the ums and the likes, because it would not be pleasurable to read a text full of ums and likes.
It also depends on the main goal that you want to achieve with that text. Is it more important that people can understand what the vibe was, or what the content was?
I think that it is also really personal, because, for example, I can hear pretty well with my cochlear implants, so I like my captions and my transcriptions to be really faithful to the original. Because if I hear and, at the same time, I read, I like those two things to be the same. Or maybe if I watch a movie, I like that, when I see the lips moving, I hear the same thing, and I read the same thing in my captions.
But other deaf people, or maybe other people who need other kinds of accessibility, like people with learning difficulties, or people with cognitive disabilities, may need, and want, a different kind of text, to get to the point more, to understand it more. So I think that it is impossible to have a solution which is perfect for everyone.
[00:21:18] Nathan Wrigley: I guess one of the things that I could do, if I had enough time and enough resources, is also translate this from English, and have it in Italian, or Spanish, or Portuguese, or whatever language we can imagine. It will just be done in English though.
In a way, the ability to read a language gives you more options to make something visible, available, to people from different cultures. If it’s just a YouTube video with no captions, no subtitles, and it’s just in English, only English-speaking people can understand that.
If you throw in the captions and the subtitles, you can actually make the exact same piece of content available to the whole world, if you could translate it into Chinese, and Japanese, and everything. But unfortunately, that’s not going to be the case, in this case.
Okay, back to the questions. This may seem like a silly question because, as I said, I have good hearing. Does the amount of deafness matter? And I don’t know what the correct word is for that, but, for example, imagine somebody who has no hearing at all, or somebody who has partial hearing: maybe one ear isn’t working as well, or they can hear half of the sounds, or anything in between. Is there a difference there? Do we have to make things available in different ways? Just tell us what that’s like.
[00:22:35] Chiara Pennetta: Yeah, I think we should talk about deafnesses, plural. Because we have a wide variety of shades of deafness, mainly since technology came in. I don’t know, maybe 70 years ago we didn’t have hearing aids and cochlear implants, which work in a certain way.
Today, we have a wide range of technology that can help retrieve our hearing. But even if two people with the same level of hearing loss use the same kind of hearing aid, it is not obvious that they will get the same amount of hearing gain.
So it is definitely a wide word. It depends on when you lose your hearing, before or after the age at which children learn to speak. And so we have prelingual or postlingual deafness.
And of course, you can lose your hearing at different levels. So mild or profound hearing loss are different. Or we have deafness in elderly people, so late in life. It’s a really, really wide word.
Some people learn sign languages when they are little children, because they are born into deaf families, into deaf culture. Some others don’t. Like, I became deaf when I was one and a half, and I was born into a hearing family. So I didn’t learn sign language until four years ago, out of my personal curiosity.
That means that I learned how to speak through hearing aids and speech therapy. But I know many people who have the same amount of hearing loss as me, from a medical point of view, but have really different speech and language abilities. We have a very wide, very big variety in the deaf world.
[00:24:32] Nathan Wrigley: So it feels to me like, when the internet came along, I don’t know, 1995 or something like that, being a deaf person, you could access almost all of the internet, because it was just images and text. That’s how the internet began. But more and more, fast forward to 2005, or whenever YouTube came along, video content started to rapidly take over. And now with TikTok, and apps, and the phone that you’ve got in your pocket, it feels like there’s more and more content that’s inaccessible to deaf people. Is that a trend? Is it becoming harder to use the internet? Things like YouTube, like I say, and TikTok, is that kind of taking over and making it more difficult?
[00:25:15] Chiara Pennetta: I don’t think it is becoming harder. Maybe in the beginning, when videos weren’t captioned, yes. But now we have automatic captions on TikTok, on Instagram mostly, and on YouTube. So even if the content creator doesn’t put in their captions, or their transcription, it is possible to activate automatic captions or automatic transcription.
[00:25:41] Nathan Wrigley: Can I just ask, do you need to tell TikTok or YouTube to do that, or does it automatically do it for you?
[00:25:48] Chiara Pennetta: Yes, it’s the user’s responsibility, because you have to click on a button to activate captions. But I think that deaf people’s lives changed with the internet because, I don’t remember the name, but back in the old days, a kind of phone was invented that transcribed phone calls. And when we started having video calls, deaf people were so happy, because they could call each other, see each other, and use sign language during their phone calls. And so communication really exploded. It was a right that came a bit later for the deaf community, the signing deaf community of course.
[00:26:30] Elena Panciera: I think that also, the ability to read lips on video calls is a good accessibility feature. And captioning is actually getting better. And also, captioning is not used only by deaf people. Actually, I am one of those people who need captions, because otherwise, I hate vocal messages. I hate these kinds of messages that are only oral, and so I prefer to read. And I’m not deaf, actually. So accessibility can really help a lot of people, even if they don’t have disabilities.
[00:27:14] Nathan Wrigley: From everything that you’ve said, it really does sound like technology has dramatically improved the life of people who are deaf, and obviously, you said it’s a spectrum. Yeah, when you mentioned video calls, I hadn’t even thought about that. But suddenly you can see the person, lip read maybe, but also the sign language. It just made communication over a distance suddenly possible.
You could speak to people from all over the world, and not rely on hearing. Which, of course, with a regular phone call, if you can’t hear it, you couldn’t interact with it at all. Profound.
So technology is really helping. And I’ve got to say, dear listener, in front of me, about two feet away, I don’t know whose it is, but there is a mobile phone. And the mobile phone is facing you two, and it’s transcribing in real time. And I’m watching it now, you’ve just turned it around so that I can see. So it’s two feet away, it must have the microphone switched on, and it’s transcribing everything I’m saying, literally perfectly. It’s astonishing. And this is, what, just Google? Just Google?
[00:28:20] Elena Panciera: Yes. It’s like a Google transcribe app. There is a free app on the Google Play store. And this is my phone actually, because I have an Android phone, and she has an iPhone, which is more accessible for other things, I think.
[00:28:38] Nathan Wrigley: But whether you’ve got an Android phone or an iPhone, I am blown away by how good that is. And obviously, I’ve never had to use it, so I’ve never really seen it. Can I just ask, dare I say, we’re going off message a little bit. Would that also enable me to talk to you in Italian? So, for example, could it translate my English words, and the text on the screen would then be Italian? Can it do that as well?
[00:29:03] Chiara Pennetta: Well, no. I think that you can, no, I know that you can set the language in which you are speaking. So if you now start speaking in Italian, I have to switch the language to Italian on the app, and it’ll start transcribing in Italian, or in any other language.
But if I wanted a live translation too, I think I’d have to use a computer, and open another app with artificial intelligence that could transcribe and translate at the same time. There are applications that can do that.
Actually, maybe yesterday, I saw on the laptop of the people who were helping out with the talks on track one, they had an application, I think, that said, if you want to change the language of the captions on your device, you could. So I think there may have been people who couldn’t understand English, because yesterday we spoke in English, and there were also captions on the main screen, projected in English, that transcribed like now.
But I think maybe you could download an app or something that also translates into another language. So this is all the work of artificial intelligence. And because languages are so rich and different, I think it is impossible to achieve a perfect translation. But for the main content, I think it’s useful as well.
[00:30:40] Nathan Wrigley: Genuinely, I’m astonished by how good that is. And it’s enabling me, I mean, you could have had headphones in, but you’ve chosen not to. But I’m noticing that you are looking down at it, and so you are seeing and reading what I’m saying in real time. That is profoundly amazing. This kind of stuff must make life a lot more straightforward, and a lot simpler. Yeah, that’s absolutely brilliant.
Off piste again. How do we do at WordPress events, helping out people who are deaf? Do we get it right? Do we have the right balance of, I don’t know, sign language, captioning? Do we do pretty well, or is there work that we still need to do?
[00:31:18] Chiara Pennetta: Well, I am very happy with the captions, but I didn’t see any sign language interpreters. I don’t know if there were any, I didn’t see any. That’s my opinion, my experience. But I also didn’t ask for one because, even if I know sign language, I don’t rely on it to access the world. So maybe if I asked for it, it would be possible to get an interpreter.
But I think the problem is, I only know Italian sign language. Since there are many sign languages in the world that I don’t know, like British or American Sign Language, I think it would be difficult to find an interpreter who can listen to English, and then, in his or her mind, translate into Italian, and then translate again into Italian sign language. There are people who can, amazing people, but it’s not easy.
Or we could ask for an International Sign interpreter. International Sign is not an actual language, but a mix of different sign languages, mostly based on ASL, American Sign Language. And so I think that I could understand it, but I have never tried.
But I’m really happy with the organisation. It was the first time for me to attend a WordPress conference, let alone WordCamp Europe, so I felt really welcome, and my needs were met.
[00:32:49] Nathan Wrigley: Good. Yeah, well, I’m pleased to hear it. I’ve no clue really as to whether or not, at some events, they do have sign language. But yeah, that’s an interesting point, I hadn’t really thought about that. The sign language would have to be in English, or American English, whichever. And then you would also have to understand that. As somebody that was using sign language, you’d not only have to be able to use the English sign language, but then presumably you’d have to translate it in your own head into Italian. So yeah, that would make life really difficult.
[00:33:18] Elena Panciera: Yeah. Actually, at this kind of event, I think that the best solution is to ask the people attending the conference, what do you need? Because actually, maybe you could have a person from France who communicates with French sign language, and they could need French sign language. And so it’s easier to meet their need, and to ask a French sign language interpreter to help them. Because otherwise it could be really, really expensive, you know, to translate to and from every language.
[00:33:57] Nathan Wrigley: Yeah. You can imagine there’d be 15 people standing on the stage, with the French translation, or the sign language, the English, the, I don’t know, Portuguese, or whatever. Yeah, that could be interesting.
So I know neither of you are lawyers. We established that at the beginning. But, as I’m near the end now, my question is all about website builders, people who build websites. That’s what most of us are here for. We’re building websites, using WordPress to do that.
Is there any responsibility, any legal things that we need to know about? I know that in 2025 we’ve got the European Accessibility Act. Will it become required, for example, for video to have captions, things like that? Do you know? And again, I stress, you’re not lawyers, we understand. But do you know if we, as website builders, are compelled to do that? I know that we should do that, it would be the right thing to do, and the moral thing to do. But I think sometimes it’s easy to not do those things because it’s cheaper, quicker, all of those things. So do you know if there’s any compulsion, any legal reason, that we must do this?
[00:34:58] Elena Panciera: Actually, yes. From 2025, we have to think in a different way about accessibility, because there is the European Accessibility Act, and we have to follow this act.
Actually, there are different levels of accessibility, and captions, for example, are quite simple and quite cheap with artificial intelligence. A translation into sign language, for example, would be another level of accessibility that is not mandatory. It depends on what the budget is, and also on what the aim of the company is.
Because maybe, if a company knows that its public, its audience, includes a huge amount of people who are deaf, it could decide to invest some money into having a sign language interpreter, for example. That is not mandatory. But the captions, well, are supposed to be there. And, yeah, also other things, transcriptions, but yeah.
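For website builders wondering what that captions baseline might look like in practice, here is a minimal sketch that attaches a WebVTT caption track to a video element. The file path and labels are placeholders, and this is an illustration, not legal guidance on the Act.

```ts
// A minimal sketch: wiring a WebVTT caption track onto a <video>
// element. The file path and labels below are placeholders.
function addCaptionTrack(
  video: HTMLVideoElement,
  src: string,
  srclang: string,
  label: string
): void {
  const track = document.createElement("track");
  track.kind = "captions";
  track.src = src;
  track.srclang = srclang;
  track.label = label;
  track.default = true; // show captions without the user having to enable them
  video.appendChild(track);
}

const video = document.querySelector<HTMLVideoElement>("video");
if (video) {
  addCaptionTrack(video, "/captions/episode.en.vtt", "en", "English");
}
```

The same markup can also be written directly in HTML as a `<track>` element inside `<video>`; the script form is just convenient when captions are added after the fact.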
[00:36:14] Nathan Wrigley: We’ll have to wait and see, won’t we, how well it’s enforced next year. Because we’ll have the legislation, but it’ll be interesting to see if people adhere to it. And if they don’t adhere to it, whether or not they get, well, punished, for want of a better word. We’ll just have to see how that goes.
I think, with the time coming up to nearly the hour, we’ll knock it on the head, as we say in the UK. We’ll end it there. But thank you for speaking to me about this today. I’m going to be really curious what you make of the transcription, and the edit of the audio that I do from this. I’m fascinated to see whether I over edit it, whether I take out the ums, and the ahs, and the silences that we had. We are going to edit some mistakes out and things like that.
But I’m more curious about what you think of the transcription that I do, and whether or not it’s what you would’ve liked it to have been, or whether or not I overdid it. So let’s wait and see. I’ll send it to you before it comes out. But thank you so much for chatting to me today, both of you. I’ve really enjoyed it, and I’ve learned a lot. Thank you.
[00:37:11] Chiara Pennetta: Thank you to you.
[00:37:12] Elena Panciera: Thank you. It was really a pleasure to be here.