The liveries of Alaska Airlines and Virgin America, which Alaska acquired. Photo courtesy of Alaska Air Group
A month after Alaska Airlines closed its acquisition of Virgin America in December 2016, the Seattle-based airline was hacked.
Alaska discovered that a malicious threat actor known to the U.S. Defense Department had abused a loophole in Virgin’s security to gain access to the system, according to Director of Security Architecture Jessica Ferguson. The government won’t let her share the hacker’s identity, but resolving the issue cost Alaska $2.5 million in aggregate and three months of work, and employee information might have been compromised. The process also highlighted some damning issues endemic to the aviation industry’s aging business model, issues that will continue to be problematic in the 21st century’s high-tech, connected world.
“It’s a huge concern,” said Ferguson, an alternate board member of the cybersecurity-focused Aviation Information Sharing and Analysis Center.
To get in, the hacker used a remote-access toolkit to exploit a vulnerability in Apache Struts, an open-source web application framework that had not been updated to the latest version. It is the same flaw that enabled the Equifax breach last year. From there, Ferguson said, “the actor moved laterally inside the environment,” using its access to jump to other systems where more desirable information or capabilities might be housed.
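The flaw behind both incidents was a known, already-patched Struts issue, so one basic safeguard is verifying that no deployment is still running a vulnerable build. The sketch below is an illustration only: the scan path and jar-naming convention are assumptions, and the fixed versions listed for the Struts Content-Type flaw should be confirmed against the project’s own advisory before anyone relies on them.

```python
# Sketch: flag Apache Struts core jars older than the builds that fixed the
# Content-Type (OGNL injection) flaw exploited in the Equifax breach.
# Assumptions: jars are named "struts2-core-<version>.jar", apps live under
# /opt/webapps, and the fixed builds are 2.3.32 and 2.5.10.1 (verify these
# against the official Struts advisory before relying on this check).
import re
from pathlib import Path

FIXED = {(2, 3): (2, 3, 32), (2, 5): (2, 5, 10, 1)}

def parse_version(name: str):
    m = re.search(r"struts2-core-([\d.]+)\.jar$", name)
    return tuple(int(p) for p in m.group(1).split(".")) if m else None

def is_vulnerable(version: tuple) -> bool:
    fixed = FIXED.get(version[:2])
    return fixed is not None and version < fixed

for jar in Path("/opt/webapps").rglob("struts2-core-*.jar"):  # assumed install root
    v = parse_version(jar.name)
    if v and is_vulnerable(v):
        print(f"VULNERABLE: {jar} (Struts {'.'.join(map(str, v))})")
```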
Fortunately for Alaska, the airline had kept — and still keeps — all of Virgin’s environment separate from the rest of the Alaska environment. That helped to keep the potential damage contained and away from the main brand. Ferguson said that decision was part of the security team’s strategy while investigating Virgin’s systems for vulnerabilities and learning everything they needed to, and it paid off.
Unfortunately for Alaska, the point of entry was a vendor-controlled system housing Airbus technical manuals. The system was required to be online and accessible to both the vendor and the FAA, so Alaska couldn’t neutralize the threat by simply taking the system offline, which Ferguson said would be the normal defense strategy in such a situation. Further complicating matters, the airline had to wait for the vendor to provide an update that patched the hole. Their hands were tied.
“We had to work with them to get a patch, and initially they came back and said, ‘Oh, well, this will take six months.’ And we said, ‘No, we can’t wait six months because we have people currently exploiting vulnerabilities to get inside of our environment,'” Ferguson recalled. “They did eventually turn that around; I think it took them about a month.”
In the interim, however, all Alaska could really do was watch and build a perimeter of security controls around the compromised system.
“With the system, now, we have software and visibility into it,” she said. “You can see the threat actor trying to execute things on it, and we would try to monitor and watch it, but there was really nothing we could do to stop it. We just kind of had to sit there and watch and make sure they were not going to be able to break out of the security controls that we put in place around that system.”
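In practice, that watch-and-contain posture amounts to streaming telemetry off the compromised system and alerting on anything that tries to cross the containment boundary, rather than blocking it outright. The following is a minimal sketch of the idea; the log path, event fields, indicator list and alert hook are illustrative stand-ins, not Alaska’s actual tooling.

```python
# Sketch of a watch-and-contain loop: tail an endpoint-telemetry feed from the
# quarantined host and raise an alert when the actor tries to execute something
# or reach beyond the containment boundary. All paths, field names and
# destinations below are assumptions made for illustration.
import json
import time

LOG_PATH = "/var/log/edr/quarantined-host.jsonl"   # assumed JSON-lines telemetry feed
SUSPICIOUS_EVENTS = {"process_start", "outbound_connection", "new_service"}
ALLOWED_DESTINATIONS = {"10.20.0.5"}               # e.g. the vendor's update server

def alert(event: dict) -> None:
    # Stand-in for paging the incident-response team.
    print(f"[ALERT] {event.get('type')}: {event}")

def follow(path: str):
    with open(path) as fh:
        fh.seek(0, 2)                # start at end of file, like `tail -f`
        while True:
            line = fh.readline()
            if not line:
                time.sleep(1)
                continue
            yield json.loads(line)

for event in follow(LOG_PATH):
    if event.get("type") in SUSPICIOUS_EVENTS:
        if event.get("dest_ip") in ALLOWED_DESTINATIONS:
            continue                 # expected traffic, e.g. vendor patching
        alert(event)                 # watched, logged and escalated, not yet blocked
```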
Once Ferguson’s team had everything they needed to expel the hacker and fix the vulnerabilities, it still took some time to make sure the coordinated effort would be effective. They had to take everything down at once, reset usernames and passwords for accounts across the company and make sure there would be no re-entry for the attacker over a 48-hour period.
“When you’re dealing with a sophisticated hacker like we were, you really have to take them all out at once. If you leave any stone unturned, they will use that and then just compromise more systems,” Ferguson said. “I have experience like that where you don’t get everything the first time and then they re-compromise the system again, and you’re kind of playing whack-a-mole with the threat actor.”
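Rotating credentials at that scale is largely a sequencing problem: every live session has to be cut and every password expired inside the same window, or the actor can ride a stale credential back in. Below is a minimal sketch of that sequencing; the directory client and its methods are hypothetical stand-ins rather than any real identity platform’s API.

```python
# Sketch of a "take it all down at once" credential rotation.
# `DirectoryClient` and its methods are hypothetical stand-ins for whatever
# identity platform an airline actually runs; the point is the sequencing:
# kill sessions first, then expire passwords, then re-verify, all in one window.
from dataclasses import dataclass, field

@dataclass
class DirectoryClient:
    revoked: set = field(default_factory=set)
    reset: set = field(default_factory=set)

    def list_accounts(self):
        return ["jferguson", "svc-manuals", "ops-admin"]   # illustrative accounts

    def revoke_sessions(self, account):        # invalidate tokens and tickets
        self.revoked.add(account)

    def force_password_reset(self, account):   # expire the current credential
        self.reset.add(account)

def rotate_everything(directory: DirectoryClient) -> None:
    accounts = directory.list_accounts()
    # Phase 1: cut every live session before any password changes,
    # so the actor cannot keep an existing foothold alive.
    for account in accounts:
        directory.revoke_sessions(account)
    # Phase 2: expire every credential in the same maintenance window.
    for account in accounts:
        directory.force_password_reset(account)
    # Phase 3: verify nothing was missed; an unturned stone is a way back in.
    missed = [a for a in accounts if a not in directory.revoked & directory.reset]
    assert not missed, f"accounts left unrotated: {missed}"
    print(f"credential rotation complete for {len(accounts)} accounts")

rotate_everything(DirectoryClient())
```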
In the end, the possible exposure of employee information was the most significant damage, Ferguson said, though there was concern about the exposure of passenger data and credit card information as well. And, while Alaska doesn’t think the hacker realized it, the intruder had gained access to ACARS, the Aircraft Communications Addressing and Reporting System used to exchange short text messages between aircraft and ground stations.
“The worst case scenario from there, from an ACARS perspective, is they could potentially send erroneous text messages to the aircraft,” Ferguson said. “There was never any point where it could put aircraft safety on the line but, obviously, it’s not something where we would want that actor messaging or fiddling around with the ACARS system.”
Alaska Airlines’ Jessica Ferguson speaking at the Global Connected Aircraft Summit 2018 in San Diego. (Avionics)
After emerging from the nightmare scenario of watching her systems get hacked while being unable to respond (she was, by the way, in her third week on the job at the time), Ferguson has advice for other airlines that might find themselves in a similar position:
- Visibility is key.

“One of the challenges that we found very early on was that we had very limited telemetry inside of our environment to be able to see what the actor was doing,” Ferguson said. “I liken it to watching what the actor was doing through a pinhole. Then we got up some metasystems, we improved the telemetry, got in some endpoints; then it was like looking at the whole world.”

She also said that full-packet capture, or the ability to record and analyze all network activity, is something Alaska has been working on since the breach to ensure her team’s ability to analyze its own systems is top-notch (a minimal capture sketch follows this list). “We didn’t have (that) going into the breach, which really caused us a lot of headaches and quite a lot of wasted time.”

- Develop a relationship with leadership.

Crucial to an effective response, Ferguson said, was buy-in from Alaska’s leadership. Executives trusting that the security team knows what it is doing is important in the event of a hack; otherwise, they might get impatient and force the team into half-baked action before the proper time. “That is key, to have that relationship built, especially if you can show to that executive team or that board that you’re taking these steps and making this plan in advance,” Ferguson said.

- Expect a breach.

“The biggest thing, I would say, is … I would always plan for a breach to happen,” Ferguson said. “I always assume that our environment is breached somewhere. I may not know about it yet, but it probably is. And having that security incident response plan in place, testing that security incident response plan, is key, and that is a big initiative for us.”
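On the full-packet capture point above: the capability amounts to recording traffic off the wire into files analysts can replay later. Below is a minimal sketch of that idea in Python using the scapy library; the interface name, rotation count and packets-per-file are illustrative assumptions rather than Alaska’s actual setup, and a real deployment would typically use dedicated capture appliances.

```python
# Sketch of full-packet capture: record all traffic on an interface into
# rotating pcap files for later analysis. Requires root privileges and scapy.
from scapy.all import sniff, wrpcap   # pip install scapy

INTERFACE = "eth0"          # assumed capture interface
PACKETS_PER_FILE = 10_000   # rotate files so captures stay analyzable

def capture(rotations: int = 3) -> None:
    for i in range(rotations):
        packets = sniff(iface=INTERFACE, count=PACKETS_PER_FILE)
        wrpcap(f"capture-{i:03d}.pcap", packets)
        print(f"wrote capture-{i:03d}.pcap ({len(packets)} packets)")

if __name__ == "__main__":
    capture()
```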
Beyond a general plan-for-the-worst, hope-for-the-best mentality, there’s good reason to always expect a breach. What happened to Alaska wasn’t a random, isolated incident; it was a side effect of the aviation industry’s business model, Ferguson said. Because airplanes and plane parts are expected to have decades-long life cycles, the turnaround for OEMs is often long and slow. But, in today’s agile, connected world, patches and updates to systems happen constantly. If things aren’t always kept up-to-date, vulnerabilities are created. The military has run into the same issue with electronic warfare.
The solution, Ferguson said, is an industry-wide overhaul in which everyone in the supply chain, and across the industry, even among competitors, works more closely to share data on threats and security as quickly as possible. Right now, the time it takes for a product to be updated, certified and passed down the chain for implementation is an open invitation for another hack.
“We deployed a crew-tracking system,” Ferguson said. “I’m not going to name names, but by the time we deployed the system, it was already vulnerable. The patches on the system were old by the time we went live on it, and now we’re rebuilding it all again.”