Formula 1: A Technical Deep Dive into Building the World’s Fastest Cars

F1 drivers experience g-forces similar to those of Apollo astronauts during Earth re-entry. Here’s how the teams design and build the cars.

For over 60 years, Formula 1 teams have developed, tested, and built the fastest and most technologically impressive cars the world has ever seen. An almost unending list of superlatives can be ladled onto F1 cars: they can accelerate from 0 to 190mph in about 10 seconds, fling around a corner at such speeds that the driver experiences g-force close to that of an Apollo astronaut during Earth re-entry, and then decelerate by 60mph in just 0.7 seconds thanks to strong brakes and massive downforce—the same downforce that stopped the car from spinning out around that corner.

But the bit that’s really impressive is that these machines are designed and built from scratch every year. That’s what makes F1 so competitive, and why the rate of improvement is so rapid. These teams—there are only about 10 of them, and most are based in England—have been challenging each other to make a new best-car-in-the-world every year for 60 years. The only way to pole position is to find an edge that no one else has thought of yet, and then to keep finding new edges when everyone inevitably catches up.

As you’ve probably guessed, materials science, engineering, bleeding-edge software, and recently the cloud are a major part of F1 innovation—and indeed, those fair topics are where we lay our scene.

For this story I embedded with Renault Sport Formula One Team as they made their final preparations for the 2017 season. As I write this, I can hear this year’s cars being tested around Circuit de Barcelona-Catalunya; a Mercedes car has just set the fastest lap time, and we’re all silently wondering if they will dominate once again.

After a difficult 2016, things are looking up for Renault Sport Formula One Team in 2017. They’re back with a new chassis and a new, fully integrated Renault power unit. The engineering teams have been reinforced with new recruits and the acquisition of state-of-the-art tooling and machines. Planning, design, and international collaboration and communications have been bolstered with a renewed partnership with Microsoft Cloud. And F1 legend Alain Prost is on board to advise drivers Nico Hülkenberg and Jolyon Palmer.

How will they fare? I don’t know; I’m a tech journalist, not a motorsports correspondent. But I can tell you how they built that car—or more accurately, how they built and scrapped thousands of possible, prototype cars in their search for one championship-winning design.

The discovery of downforce

For the first thirty years of Formula 1’s history, the cars were mostly dumb mechanical beasts; not much mattered beyond the driver, tyres, and power train. Then, in 1977, Team Lotus (quite different from the recent Lotus F1 team, which then became Renault Sport Formula One Team) started paying more attention to aerodynamics—specifically the ground effect, which in the world of motorsport is usually known as downforce. The underside of the Lotus 79 F1 car was curved like an upside-down airplane wing, creating a pocket of low pressure that essentially sucked the car to the ground.

The Lotus 79 was massively successful and before long—once the other teams eventually sussed out Lotus’s black magic—every Formula 1 car was sculpted to provide maximum downforce. One design, the Brabham BT46, even had a big ol’ fan that sucked air out from beneath the car.

Over the next few years Formula 1 got faster and faster, especially around corners. Eventually, following a number of accidents and the death of Gilles Villeneuve in 1982, the FIA mandated a return to flat-bottomed cars. The aerodynamic cat couldn’t be put back in the bag, though.

25 teraflops and not a drop more

Almost every area of technological and engineering advancement in F1 has followed a similar path to aerodynamics. A team finds an area that hasn’t yet been regulated by the FIA, or where existing regulation can be creatively interpreted; that team pushes to within a few millimetres of the regulations, sometimes stepping slightly over the line; other teams follow suit; then the FIA revises its regulations and the cycle begins again.

As you can imagine, then, after some 60 years of trying to outwit the feds, Formula 1 today is governed by a rather long list of regulations—hundreds of pages of them, in fact.

For example, each Formula 1 team is only allowed to use 25 teraflops (trillions of floating point operations per second) of double precision (64-bit) computing power for simulating car aerodynamics. 25 teraflops isn’t a lot of processing power, in the grand scheme of supercomputers: it’s about comparable to 25 of the original Nvidia Titan graphics cards (the new Pascal-based cards are no good at double-precision maths).

Oddly, the F1 regulations also stipulate that only CPUs can be used, not GPUs, and that teams must explicitly declare whether or not they’re using AVX instructions. Without AVX, the FIA rates a single Sandy Bridge or Ivy Bridge CPU core at 4 flops per clock cycle; with AVX, each core is rated at 8 flops per clock cycle. Every team has to submit the exact specifications of its compute cluster to the FIA at the start of the season, and then a logfile after every eight weeks of ongoing testing.
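
To make that accounting a bit more concrete, here’s a minimal sketch of how a cluster’s FIA-rated throughput could be tallied, assuming the rating is simply cores multiplied by clock speed multiplied by the per-core flops-per-clock figure above. The core count and clock speed below are illustrative placeholders rather than any team’s real numbers, and the FIA’s actual accounting, tracked through those eight-week logfiles, is presumably more involved.

```python
# Rough sketch of the FIA-style CFD compute rating described above.
# Assumption: the rating is cores x clock x flops-per-clock, with each
# core counted at 4 double-precision flops per clock without AVX and
# 8 with AVX. The core count and clock speed are illustrative only.

def cluster_teraflops(cores: int, clock_ghz: float, avx: bool) -> float:
    """FIA-rated double-precision teraflops for a CPU-only cluster."""
    flops_per_clock = 8 if avx else 4            # per-core rating
    total_flops = cores * clock_ghz * 1e9 * flops_per_clock
    return total_flops / 1e12                    # convert to teraflops

FIA_CAP_TFLOPS = 25.0

# Hypothetical cluster: 1,200 cores at 2.6GHz with AVX enabled.
rating = cluster_teraflops(cores=1_200, clock_ghz=2.6, avx=True)
verdict = "within" if rating <= FIA_CAP_TFLOPS else "over"
print(f"Rated at {rating:.1f} TFLOPS ({verdict} the 25 TFLOPS cap)")
```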

Renault Sport Formula One Team recently deployed a new on-premises compute cluster with 18,000 cores—so, probably about 2,000 Intel Xeon CPUs. While the total number of teraflops is strictly limited, other aspects of the system’s architecture can be optimised. For example, the team’s cluster has highly parallel storage. “Each compute node has a dedicated connection to storage, so that we don’t waste flops on reading and writing data,” says Mark Everest, one of the team’s infrastructure managers. “There was a big improvement in performance when we changed from our old cluster to the new one, without necessarily changing the software,” Everest adds, all while working under the same 25-teraflops processing cap.

Everest says that every team has their own on-premises hardware setup, and that no one has yet moved to the cloud. There’s no technical reason why the cloud can’t be used for car aerodynamics simulations—and F1 teams are investigating such a possibility—but the aforementioned stringent CPU stipulations currently make it impossible. The result is that most F1 teams use a somewhat hybridised setup, with a local Linux cluster outputting aerodynamics data that informs the manufacturing of physical components, the details of which are kept in the cloud.

Wind tunnel usage is similarly restricted: F1 teams are only allowed 25 hours of “wind on” time per week to test new chassis designs. Ten years ago, in 2007, it was very different, says Everest: “There was no restriction on teraflops, no restriction on wind tunnel hours. We had three shifts running the wind tunnel 24/7. It got to the point where a lot of teams were talking about building a second wind tunnel; Williams built a second tunnel.

“We decided to go down the computing route, with CFD—computational fluid dynamics—rather than build another wind tunnel. When we built our new compute cluster in 2007, the plan was that we’d double our compute every year. Very quickly it was realised that the teams with huge budgets—the manufacturer-backed teams—would get an unfair advantage over smaller teams, because they didn’t have the money to build these enormous clusters.”

Soon after, to prevent the larger F1 teams from throwing more and more money at aerodynamics, the FIA began restricting both wind tunnel usage and compute power for simulations.

Building a new car every two weeks

While there are severe restrictions on Formula 1 chassis testing, other aspects of F1 are surprisingly unrestricted.

The software tools used to design car parts (CAD), to manufacture those parts (CAM), and to simulate their efficiency are all unregulated. Teams are free to use whatever software they like, and it can be run locally or in the cloud. For CFD, the team’s chassis factory in Enstone and the French power unit division in Viry-Châtillon both use the STAR-CCM+ suite made by CD-adapco (recently acquired by Siemens). The team partners very closely with CD-adapco to optimise the software for its needs. Both divisions use Dassault Systèmes’ CATIA suite for CAD and CAM.

Every Formula 1 team has its own software setup, which is then tweaked so that it integrates with the team’s myriad other systems: the aforementioned CFD and wind tunnel systems, the rapid prototyping and manufacturing systems, and then ultimately on to the stock tracking system, which is vital for making sure you actually turn up at a race with a working car and enough replacement parts.

While Formula 1 is the epitome of bleeding-edge engineering, the teams are still susceptible to the plague of legacy software. For example, until recently, Renault Sport Formula One Team used a 77,000-line Excel spreadsheet to track the design and build of the season’s new car. “It was a mix of data exports from the old ERP [enterprise resource planning] system, some transformations, some manual work. It was pretty much out of date as soon as it was created,” says Everest.

Now all of that data is kept in Dynamics 365, a much more capable and adaptable system. Dynamics 365 is based in the cloud, making it much easier for the team’s employees to access that data from wherever they might be: England, France, or trackside at the 20 races that will be held in 2017.

“It’s also visually better and more useful than what we had in Excel, thanks to Power BI,” Everest says. Power BI is a visualisation and analytics tool that pulls real-time data from Dynamics 365. “We can create dashboards with a high-level view for the execs and then drill down to a higher granularity for users who are working on a specific area of the car.

“It’s in very high demand across the company,” Everest says with a chuckle. “As soon as we roll out something in Power BI to one department, another department sees it and says ‘we want that, too!'”

Logistics

Even beyond car design and testing and towards other important areas such as logistics and collaboration, Renault Sport Formula One Team is “a big Microsoft shop,” says Everest. “Some of the clusters run Linux, because the software is very specific on there. But mostly it’s Windows, for server and client. We use Office 365 for collaboration and e-mail in the cloud, SharePoint Online, and we’re rolling out OneDrive to some users as a replacement for mapped drives. It makes it much easier to share data between Enstone, Viry-Châtillon, from the track, or even when people are at home.”

Collaboration and communication tools are perhaps the most important piece of the software puzzle for the race team, which in 2017 will spend 230 days away from home. Starting in Melbourne in March and ending in Abu Dhabi in November, each F1 team must set up and break down their mobile headquarters 20 times. There’s a two-week gap between most of the races, but on five occasions this year teams have to pack up shop and move everything in less than a week.

On those back-to-back weekends, F1 teams have about 36 hours to disassemble everything—including the cars—and then transport about 50 tons of equipment—spare parts, fuel, tools, computers, food—to the next location. In Europe, for example between the Belgian GP on August 27 and the Italian GP on September 3, the teams use a fleet of trucks; elsewhere, everything gets bundled onto a few jumbo jets.

Because the race calendar is known in advance, though, the en masse schlepping of stuff is actually the easy bit of F1 logistics. The difficulty comes from the unplanned movement of parts and people. “Every race, the car is different,” says Geoff Simmonds, the race team coordinator. It’s Simmonds’ job to work closely with colleagues to make sure everyone and everything arrives on time for the season’s 20 races, even when there might only be a few hours to get a new carbon fibre wing from HQ to the race.

Each track has different requirements—Monaco, with its tight corners, wants as much downforce as possible and grippy tyres; Monza, with its long straights, demands minimal drag and hard-wearing tyres—and thus cars are almost completely reconfigured and rebuilt between races. (In case you were wondering, the configuration for each is mostly derived from thousands of simulations back at HQ.)

Then, as the season develops and flaws are discovered and terabytes of race data are processed, fundamental changes to the car will be made. Those changes have to be put in the wind tunnel or run through CFD before they’re manufactured. Physically producing a new metal part is quite easy—like most F1 teams, Renault Sport Formula One Team has a factory with CNC milling and sintering machines under the same roof as the designers and engineers—but a new carbon fibre wing might take 10 days or more.

These new parts then need to be flown from HQ to wherever the race team is currently holed up. That is the tricky bit, logistically speaking. Simmonds walks me through a particularly extreme example: “I could send someone back tonight from Barcelona. I know the last flight out is at 9:30pm. They’d be back in Enstone by midnight. They could work overnight and then fly back in the morning with the new part in a suitcase. There’s a plane that gets in at 9:25am.”

Simmonds capitalises on the amount of real-time travel data that is now available to the public—but it’s surprising to learn that he doesn’t use any special or custom-made tools. He checks flight times and books tickets via the airline’s mobile app, and when planning a route for the trucks he loads up a satnav app to check for traffic. It’s a lot better than the olden days, when you booked flights via a travel agent (which closed at 6pm) and traffic information was scarce. Simmonds says he used to allow 20 hours to drive a van from Enstone in Oxfordshire to Barcelona; now it’s closer to 17 or 18 hours, thanks to better vehicles, roadways, and information.

Another surprising aspect of F1 logistics is that there’s a large amount of camaraderie between the teams. While car designs and race tactics are top secret, “if I need something flown over, I’ll ask another team to see if they already have someone on that flight,” Simmonds says. F1 is a rather incestuous sport: there are only so many race team coordinators, Simmonds explains, and most of them are either friends or erstwhile colleagues, so lending a helping hand comes naturally.

Because there’s so much chatter and movement of talent between teams, though, nothing remains truly secret for very long in F1. To stay ahead, Simmonds says, you just have to keep getting better at what you do. Thanks to powerful computers and in-house manufacturing, it’s now possible to design and build a brand new F1 car in a matter of months—but that doesn’t mean the teams suddenly have a ton of spare time on their hands. If anything the teams are even busier as they try to iterate through as many designs as possible before the season begins.

“Everything is time critical,” Simmonds says. “Everyone works to the same goal in a highly parallel way. Everyone has their own timeline they have to hit. And then you just start bringing in more people and working longer hours.”

Making sense of the data

The larger Formula 1 teams have about 1,000 employees. By FIA decree, however, each team can only have 60 engineers and technicians trackside at each race. To circumvent that limitation, every team has a high-speed Internet connection back to HQ.

The connection in the pits this year runs at about 80Mbps and links a trackside compute cluster to a NASA-like “Mission Control” setup back home, says Bob Bell, CTO of Renault Sport Formula One Team. Some decisions are made by the trackside engineers, but others are handed off to those working remotely. (I like to think they run off to a blackboard, do some calculations, and then transmit the results back to the pits.)

Real-time telemetry from the F1 car is strictly regulated by the FIA, however. While there are around 200 high-frequency sensors on each F1 car, the wireless link back to the pits is capped at 2Mbps. Higher-resolution data (10Mbps) is stored on the car’s ECU (engine control unit), but teams aren’t allowed to touch that until the end of the race. For a brief time in the early 2000s the FIA allowed bi-directional communication between the car and pits, but it was quickly outlawed; today, all the engineers can do is talk to the driver.
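
Those caps give a feel for why the live feed is only a thin slice of what the car actually records. As a back-of-the-envelope sketch using just the figures quoted above (the 16 bits per sample is my own assumption; real sensor payloads and sampling schemes will differ), splitting each link evenly across roughly 200 sensors looks like this:

```python
# Back-of-the-envelope telemetry budget using the figures quoted above:
# ~200 sensors, a 2Mbps live radio link, and ~10Mbps logged on the ECU.
# The 16-bit sample size is an assumption made purely for illustration.

SENSORS = 200
LIVE_LINK_BPS = 2_000_000       # wireless link back to the pits
ECU_LOG_BPS = 10_000_000        # higher-resolution data kept on the car
BITS_PER_SAMPLE = 16            # assumed payload per sensor reading

def samples_per_sensor(link_bps: int) -> float:
    """Average samples per second each sensor gets if the link is shared evenly."""
    return link_bps / (SENSORS * BITS_PER_SAMPLE)

print(f"Live link:   ~{samples_per_sensor(LIVE_LINK_BPS):.0f} samples/s per sensor")
print(f"ECU logging: ~{samples_per_sensor(ECU_LOG_BPS):.0f} samples/s per sensor")
# Roughly 625 vs 3,125 samples/s, which is why the detailed analysis has
# to wait until the logged data comes off the car after the race.
```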

After driver skill and car performance, the interpretation of real-time car data is probably the most important part of finishing—and perhaps winning—an F1 race. António Félix da Costa, back when he was Infiniti Red Bull’s reserve driver, famously said that without real-time support, “you’d go for two laps and then stop and break down.”

One incident a few years ago, involving Lewis Hamilton, perfectly illustrates the potential of real-time telemetry. Hamilton’s engineers said over the radio that he had a puncture. “No, no, the car’s fine,” Hamilton replied. “No, really, come in, you’ve got a puncture,” the engineers insisted. And sure enough, he did indeed have a puncture, but Hamilton hadn’t felt it yet.

As you can imagine, though, deriving intelligence from those 200 sensors in real time is like looking for a needle in a haystack—and it doesn’t get any easier once the race has finished, either. “Going through that data, and finding what’s relevant, is a complex data science problem,” says Everest. Over an entire race weekend the team’s two cars produce 35 billion data points, he says. “Because there’s so much data, it’s hard to get your head around how you can best use that data.”

Importantly, this is an area that isn’t yet regulated by the FIA. Unlimited compute power, as many data scientists as you can afford, and exotic big data algorithms can be thrown at the problem. Renault Sport Formula One Team is experimenting with Azure Machine Learning—Everest says they’ve used AML to create an accurate tyre degradation model that is then used by the car simulator—but other teams are modelling different things with different cloud compute providers.
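
Everest didn’t go into the details of that model, and the sketch below is emphatically not it; it’s just a toy illustration of the general idea of learning a tyre-degradation curve from stint data and handing the fitted curve to a simulator. The lap times, the quadratic form, and the function names are all invented for illustration.

```python
# Toy illustration of a tyre-degradation model: fit lap-time loss as a
# function of tyre age from synthetic stint data, then let a simulator
# query the fitted curve. This is NOT the team's model; the data, the
# quadratic form, and the names are all invented for illustration.
import numpy as np

# Synthetic stint: laps completed on the tyre vs. lap time in seconds.
tyre_age = np.array([1, 3, 5, 8, 11, 14, 17, 20], dtype=float)
lap_time = np.array([92.1, 92.3, 92.6, 93.0, 93.5, 94.1, 94.8, 95.7])

# Least-squares fit of a simple quadratic degradation curve.
degradation = np.poly1d(np.polyfit(tyre_age, lap_time, deg=2))

def predicted_lap_time(age_laps: float) -> float:
    """Estimated lap time for a given tyre age, per the fitted curve."""
    return float(degradation(age_laps))

# A race simulator could query this to decide when a pit stop pays off.
for age in (5, 15, 25):
    print(f"Tyre age {age:>2} laps -> predicted lap time {predicted_lap_time(age):.2f}s")
```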

Other Formula 1 teams declined to tell me which areas they are hoping to bolster with machine learning; for now, it’s a sensitive area that might net major performance or reliability improvements over the next few years. “The more we learn about big data and machine learning,” says Everest, “the more possible applications we come up with.”

He warns that big data isn’t a one-stop solution, though: “You don’t just load up all your data and press ‘go.’ The media gives the impression that big data and machine learning can solve all your problems by looking for patterns and finding answers to questions that you didn’t know you had. But it’s not quite as straightforward as that.

“You’ve got to have a huge amount of domain knowledge about that data. You have to have data scientists, and they have to meet in the middle with the domain experts to obtain the best understanding of the data.”

Rushing towards the horizon

Things were definitely simpler, back before ubiquitous high-speed Internet access and high-performance computing. Simmonds says that in 1999, trackside Internet access consisted of a laptop with a PCMCIA modem. “Back then, it was ISDN.” A single ISDN line, which bonds two 64Kbps channels over a telephone cable, clocks in at around 128Kbps. “Some teams had six ISDN lines,” he says, verging on reverie.

At the turn of the century there was just enough spare bandwidth for Simmonds to check the football results on Saturday. Now there’s enough bandwidth to concurrently download the latest race-critical data from Dynamics or SharePoint, video call the kids back at home, offer Wi-Fi access for any VIPs who might be visiting—and watch highlights from the Premier League.


By Sebastian Anthony
Source: Ars Technica UK
