Social engineering attacks are among the most common attack vectors that customers face in today's chaotic and often very challenging technology landscape. I have talked about this in previous articles and in my book, but I sadly have to tell you that there is another method being utilised by malicious actors, and it is going to be hard for people to pick up on.
Deep fake technology has been hyped for around a year now and was, in my opinion, just a bit of fun with no real threat to businesses from an information security perspective, as it didn't really work properly and it was easy to tell that a voice or video had been generated by a computer. However, that is no longer the case. Several instances have come to light in which malicious actors used the voice-generation side of deep fake technology to leave voicemail messages, or to hold a generated conversation on the phone with employees of a company, claiming to be the CEO or finance manager. They would request payments to be made or account details to be changed.
In all of the instances I am aware of, these deep fake voices were used in combination with an email sent in a similar style to many we have all seen before: from an address created on a generic hosting platform using the name of the CEO, or of whoever was being impersonated. Hearing what appeared to be that person's voice, whether on voicemail or in a quick, sharp call that ended before questions could be asked, gave the email validity.
There are many versions of the technology available, one of them being Lyrebird, which does a great job of creating what they call a voiceprint or voice map. The process is very similar to what you go through when setting up Google Home, Siri or whichever personal assistant has its tentacles wrapped around your particular platform: you are asked to say some pre-mapped phrases, which it uses to learn your voice patterns so that it can easily mimic you.
I know what you are thinking: how can they do this with someone who is not voluntarily saying the required phrases? I'm glad you asked. In many cases it's actually easy, as CEOs are not normally shy individuals and will often have video recordings of themselves on the internet, from interviews, company promotional videos or whatever other source an attacker can find. Failing that, they could infect a device, webcams being quite an easy part of a system to gain access to on many occasions (have a Google for the recent large-scale breach that allowed access to thousands upon thousands of webcams), and then simply record and process the audio to get what they need.
Honestly, this second method is harder because you need a much bigger sample of the person's voice, something in the vicinity of eight hours, but if you have access to the person's webcam I am sure you will be able to get it. We are all at risk of this threat, not just our users. Would you say no to your CEO if they asked you to do something right now? Most wouldn't. Honestly, ask some people in your office what they would do.
Let's lay out a scenario: the CEO calls and leaves a voicemail saying they need a payment of $$$$ made to this account, in this person's name, today, for a new project ****. If they heard the voice and it sounded like the CEO or financial controller, many people would skip any normal checks and just do it. Seriously, ask them. I know we have all been working hard on getting people to use a secondary check process, maybe even a third, but this blows the normal logic out of the water. We would normally instruct users to call and verify before making changes like this or paying irregular amounts, but they just heard the CEO himself tell them to do it. In this scenario it wasn't actually the CEO, but how would they know that? They wouldn't. This isn't something we have warned them about before, but we need to ensure they are aware of the possibility, and we need to adapt our procedures so that this threat can be managed.
Still not convinced that this is a real threat? Check out this video from journalist Ashlee Vance. It came very close to perfectly replicating his voice, and it was made over a year ago. The technology has come a long way since then, and in my opinion it was pretty good already.
So now that I have you all convinced it is a real threat, what can we do about it? I think we just need to change our procedures a little to cope with this new attack method. If a user receives a voicemail or email from someone appearing to be the CEO, finance manager or anyone else, staff should not take the request at face value. Once the call has finished, they should email the internal company address of the real person, or call them on an already-known number, to confirm that the request is genuine. If that person is not available, the request should go to whoever is in charge in their absence, who can validate it and make the final call.
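The procedure above boils down to a simple decision rule: approve only when the sender checks out against the internal directory and the request has been confirmed out-of-band on an already-known number. Here is a minimal sketch of that rule; the directory entries, function name and return values are all illustrative assumptions, not a real system:

```python
# Hypothetical internal directory. The key point: contact details come
# from here, never from the suspicious email or voicemail itself.
KNOWN_CONTACTS = {
    "ceo@example.com": "+00 0000 0000",
}

def verify_request(sender_email: str, callback_confirmed: bool) -> str:
    """Decide what to do with a payment/account-change request.

    sender_email       -- the address the request appears to come from
    callback_confirmed -- True only if the person confirmed the request
                          when called on their already-known number
    """
    if sender_email not in KNOWN_CONTACTS:
        # Look-alike or generic hosting address: escalate, never act.
        return "escalate"
    if not callback_confirmed:
        # Could not reach the real person out-of-band: escalate to
        # whoever is in charge in their absence.
        return "escalate"
    return "approve"
```

A spoofed address like `ceo@freemail-host.example` fails the first check even if the voicemail sounded perfect, which is exactly why the check must key off the internal directory rather than the message.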
In my opinion, if a request can't be verified, your procedures should allow staff to decline it no matter who appears to be making it. All requests should follow correct company procedures, with no exceptions to that rule. I know people don't want to get in trouble with the CEO, but in cases like this senior management needs to back the procedures, and there should never be consequences for staff following correct procedures; that is their job, after all.
It's a scary new world we live in, and it's getting harder to know what you can and can't trust, but I think if we create solid procedures and stick to them we will be okay. Don't just push this to the side and assume it won't happen to you; it will, and you need to be prepared. A little bit of preparation and you won't have egg on your face when it does. You can thank me later.
As always, tell me what you think. Have you experienced this type of threat already? Do you think I am worried about something that will never become a mainstream attack used by malicious actors? Seriously, I want to know what you all think.
Till next time…