
As aircraft become more sensor-rich and computerised than ever before, the cockpit is transforming into a sophisticated, automated human-machine system that translates machine sensing into human situation awareness, and human commands into machine actions.

Despite the conservatism of military flying, aviators cannot escape the changes we all experience in daily life, changes that will shape the way pilots fight, win and survive in future conflicts. Since the beginning of aviation, a unique place was reserved for the pilot, combining ergonomics (seat, windshield), flight instruments (gauges on the instrument panel) and flight controls (pedals, stick and throttle). With those indications the pilot assessed the aircraft’s situation and, by looking outside, developed ‘Situation Awareness’, the essential perception for flying and aerial fighting. This simple representation was enough for the fighters of World War One, where aircraft operated in close formations and relied on hand signals for co-ordination. In World War Two, radio communications and radar introduced new dimensions to air combat, enabling flights to operate as part of a bigger, centrally controlled ‘master plan.’ In the mid-1960s, the appearance of missiles introduced another change, exposing aircraft to overmatch at much longer ranges and challenging them with weapons whose manoeuvrability exceeded human tolerance.

The Glass Cockpit

The aircraft and weapons of that generation were limited to attacking the enemy in the direction of flight. While missiles evolved over the years to become smarter, more manoeuvrable and more accurate, they have always relied on the pilot to acquire the target. The seeker gimbals of early-generation missiles were limited in movement, which required manoeuvring the aircraft to bring the target into their field of view. Highly developed pilot skills were also critical to hitting targets with bombs.

The cockpit of the F-35 LIGHTNING II is the most advanced human-machine environment designed to date. (Photo: US Navy)

The first generation of radars and weapons each used dedicated displays. However, with the introduction of multi-function displays (MFDs), the fighter cockpit was organised into a two- or three-display ‘glass cockpit’, operated through a complex array of switches, levers and triggers under the Hands On Throttle And Stick (HOTAS) concept. Helicopters and transport aircraft also adopted MFDs, typically three or four units arranged in a horizontal row. These displays depicted information derived from the sensors on board.
The Head-Up Display (HUD) was an essential capability for both air/air and air/ground tasks, as it displayed the missile seeker’s field of view or a continuously computed impact point, depicting where bombs were likely to hit. Presenting this information in front of the pilot, along the flight path, enabled the pilot to manoeuvre the aircraft to score a perfect hit. A typical HUD is BAE Systems’ LiteHUD, a modern, lightweight and compact unit, which utilises a Digital Light Engine (DLE), a digital display that replaces the traditional Cathode Ray Tube (CRT) of legacy analogue HUDs. New waveguide optics deliver a compact, low-profile design, which can be integrated into most cockpits, from turboprop trainers and fighters to transport aircraft. With its low profile, it can also integrate with the wide-area displays characteristic of modern fighter cockpit designs.
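At its core, a continuously computed impact point is a ballistics prediction refreshed every display frame. As a rough illustration only, the Python sketch below assumes a drag-free trajectory over flat terrain, with all figures invented for the example; an operational HUD computer adds drag tables, wind and weapon-specific ballistics.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ccip_ground_range(altitude_m: float, speed_ms: float, dive_angle_deg: float) -> float:
    """Horizontal distance from the release point to impact, assuming a vacuum
    (drag-free) trajectory over flat terrain."""
    gamma = math.radians(dive_angle_deg)      # dive angle below the horizon
    vx = speed_ms * math.cos(gamma)           # horizontal velocity component
    vz = speed_ms * math.sin(gamma)           # downward velocity component
    # Solve altitude = vz*t + 0.5*G*t^2 for the time of fall t
    t = (-vz + math.sqrt(vz**2 + 2 * G * altitude_m)) / G
    return vx * t

# Example: release at 1,500 m altitude, 200 m/s, in a 10 degree dive
print(round(ccip_ground_range(1500, 200, 10)))   # roughly 2,800 m ahead of the aircraft
```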

Evolving Fighter Displays

BAE Systems’ TEMPEST “wearable cockpit” utilises 3D audio, voice command and haptic feedback sensors embedded in gloves, which give the tactile impression of touching real buttons and switches. (Photo: BAE Systems)

By the 1990s, all-aspect missile seekers, active radar-guided air/air missiles and precision-guided air/ground weapons had been introduced, enabling more flexible target engagement. With improved sensors carried on board, such as targeting pods and airborne radars, pilots could also locate and designate targets off boresight.

The means to exploit this capability were wide-field-of-view HUDs, offering a larger eye-motion box, and head-mounted cueing systems, a kind of helmet-mounted sight that tracked the pilot’s line of sight and cued the sensor or seeker to that direction. Such systems enabled pilots to realise the full potential of all-aspect air/air missiles, offsetting an opponent’s manoeuvrability with a missile of better kinematics.

HUDs and helmet-mounted cueing systems combine head-tracking and display systems using computer-generated vector graphics to present information to the aircrew in any direction they look.
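At the heart of such systems is a coordinate transformation: the tracker reports the head orientation, and the symbol generator rotates each target direction into the helmet’s frame before drawing it. The sketch below is a simplified illustration, assuming a yaw/pitch-only head tracker and invented frame conventions rather than any particular vendor’s implementation.

```python
import numpy as np

def head_rotation(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Head orientation from the tracker as a rotation matrix (yaw then pitch;
    roll omitted to keep the sketch short)."""
    y, p = np.radians([yaw_deg, pitch_deg])
    Rz = np.array([[np.cos(y), -np.sin(y), 0.0],
                   [np.sin(y),  np.cos(y), 0.0],
                   [0.0,        0.0,       1.0]])
    Ry = np.array([[ np.cos(p), 0.0, np.sin(p)],
                   [ 0.0,       1.0, 0.0      ],
                   [-np.sin(p), 0.0, np.cos(p)]])
    return Rz @ Ry

def to_helmet_frame(target_dir: np.ndarray, head_yaw: float, head_pitch: float):
    """Rotate a unit vector pointing at the target (aircraft frame) into the
    helmet frame and return azimuth/elevation for the symbol generator."""
    d = head_rotation(head_yaw, head_pitch).T @ target_dir   # inverse head rotation
    az = np.degrees(np.arctan2(d[1], d[0]))
    el = np.degrees(np.arcsin(np.clip(d[2], -1.0, 1.0)))
    return az, el

# Target 30 degrees right of the nose; with the head also turned 30 degrees right,
# the symbol sits at the centre of the display.
target = np.array([np.cos(np.radians(30)), np.sin(np.radians(30)), 0.0])
az, el = to_helmet_frame(target, head_yaw=30, head_pitch=0)
print(round(az, 1), round(el, 1))   # approximately 0 0: symbol centred
```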

The F-35 cockpit represents the most advanced human-machine environment designed to date. It is the first fighter without a HUD, and the first to use a single large area display (LAD), providing an expansive area for the presentation of flight information and the tactical picture. The LAD represents a departure from the traditional three-MFD ‘glass cockpit’ design, where each display depicts a specific sensor page. Complementing the LAD are the helmet-mounted display and a distributed aperture system (DAS) providing the pilot with 360-degree visibility from the cockpit, even in total darkness or when the aircraft fuselage obstructs the view. The original DAS was designed by Northrop Grumman, while Raytheon produces a different version slated to enter future production lots. The helmet-mounted display is produced by VSI, a joint venture of Elbit Systems and Collins Aerospace.

Existing fighters undergoing modernisation have also adopted this approach to some extent, replacing the multi-MFD glass cockpit with large area displays or replacing parts of the legacy instruments with new consoles offering the maximum display area. The F-16 Center Pedestal Display (CPD), used in the latest versions of the aircraft including the F-16V, is one such concept.

Innovative design features from BAE Systems rest on the idea of the ‘wearable cockpit’, which utilises augmented reality to build fully functional virtual displays and instruments in front of the pilot’s eyes. A semi-transparent augmented reality display projects virtual objects on top of the live scene, while 3D audio, voice command and haptic feedback sensors embedded in gloves give the tactile impression of touching real buttons and switches. Other control methods could employ eye tracking, neural control technology and physio- and psychometric sensors, which would constantly monitor the pilot’s health and mental capacity. Should the pilot lose track of an important piece of information, or lose consciousness due to high G or low oxygen, the mission computer or autopilot could automatically take control to save the situation. The concept was presented in public for the first time in 2018, as part of the introduction of the TEMPEST future combat aircraft concept.

Improving the Helicopter Cockpit

Aviator Night Vision Systems (ANVIS) and Helmet Mounted Displays (HMDs) are used to deliver a complete picture, such as a night view generated by image intensification systems or Forward-Looking Infrared (FLIR). More recently, advanced HMDs generating colour video have been used to deliver more information to pilots.

Today, such display systems provide much more than night vision. A miniature display and optics combiner attached to a helmet mount or ANVIS provides helicopter pilots with vector graphics symbology and raster images, displayed through the ANVIS HUD system. Such devices have become critical ‘enhanced vision’ aids, improving flight safety at low level and helping to handle brownout conditions when landing in desert terrain or on snow-covered landscapes.

In these conditions, a thick dust or snow cloud blocks the pilot’s vision near ground level, at the most critical phase of the landing. These systems are only as good as the sensors with which they operate. To improve performance, Elbit Systems developed the BRIGHTNITE sensor, which covers an ultra-wide field of view with an array of thermal imaging sensors, delivering a clear image of the area in degraded visual environments (DVEs), such as under thick smoke or dust cover. Other concepts repurpose existing sensors, such as infrared threat warning devices already used on helicopters, or distributed aperture systems (DAS) derived from other platforms such as the F-35, for use as a panoramic or spherical (360-degree) viewing system.

The Pilotage Distributed Aperture Sensor (PDAS) developed by Lockheed Martin recently made its first flight on the Bell V-280 VALOR tilt-rotor aircraft. PDAS consists of six infrared sensors distributed around the aircraft and linked to aircrew helmets and cockpit displays through an open-architecture processor (OAP). Although the system currently supports two users (the aircrew), it will ultimately support up to six, enabling transported troops to survey the environment for tactical information and threats. Other enhancements will include integration with Multi-Modal Sensor Fusion (MMSF), a multi-sensor feed that helps restore aircrew situation awareness in DVE and enables navigation in GPS-denied zones.

Other changes to the helicopter cockpit are being explored under DARPA’s Aircrew Labour In-Cockpit Automation System (ALIAS) programme, through which Lockheed Martin’s Sikorsky is developing system intelligence intended to give operators the confidence to fly aircraft safely, reliably and affordably in optimally-piloted modes, enabling flight with two, one or no crew.

ALIAS envisions a tailorable, drop-in, removable kit that would introduce high levels of automation into existing aircraft, enabling operations with a reduced onboard crew. It uses Sikorsky’s MATRIX technology, a capability toolkit that includes multi-spectral sensors, hardware and software to enable scalable automation. As an automation system, ALIAS supports the execution of an entire mission, from take-off to landing, even in the face of contingency events such as aircraft system failures. To cope with such emergencies, ALIAS uses persistent-state monitoring and rapid recall of flight procedures.
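In broad terms, persistent-state monitoring pairs a continuously updated picture of aircraft health with a library of recallable procedures. The sketch below is a deliberately minimal illustration of that pattern; the parameters, thresholds and checklist steps are invented and do not represent the ALIAS or MATRIX implementations.

```python
from dataclasses import dataclass

@dataclass
class AircraftState:
    engine_oil_pressure_psi: float
    hydraulic_pressure_psi: float
    radar_altitude_ft: float

# Hypothetical procedure library keyed by contingency; a real system would hold
# the full, approved flight-manual checklists.
PROCEDURES = {
    "ENGINE_OIL_LOW": ["Reduce power", "Land as soon as practicable"],
    "HYDRAULIC_FAIL": ["Switch to backup pump", "Limit control inputs", "Land as soon as possible"],
}

def monitor(state: AircraftState) -> list[str]:
    """Persistent-state monitoring: compare the latest state against simple
    thresholds and recall the matching procedure steps."""
    actions = []
    if state.engine_oil_pressure_psi < 25:
        actions += PROCEDURES["ENGINE_OIL_LOW"]
    if state.hydraulic_pressure_psi < 1500:
        actions += PROCEDURES["HYDRAULIC_FAIL"]
    return actions

print(monitor(AircraftState(engine_oil_pressure_psi=18,
                            hydraulic_pressure_psi=2900,
                            radar_altitude_ft=500)))
```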

Sensor-Based Situation Awareness

Situation awareness (SA) is a fundamental and critical asset, especially in air combat. The air aces of the past were those who could leverage it to quickly seize the moment, take advantage of the situation and apply the right manoeuvre and weapon to score a kill. In the jet and missile age, where the reach of platforms, sensors and weapons far exceeds human sensing, situational awareness is a combined and shared resource, just as much as the aircraft and the weapons they carry.

The battlespace is constantly monitored and mapped by imaging sensors, radars and identification systems (Identification Friend or Foe, IFF) from different aircraft and ground radars. Some of these sensors may be carried by aircraft positioned far from the scene, or by ground-based systems assisting the flight. Others could be stealth aircraft or unmanned platforms, inserted deep inside enemy airspace to provide an inner view of the situation.

Individual fighters receive a segment of that sky picture, as viewed around the aircraft. Such a picture indicates the location of all friendly assets, known enemy missile-defended areas, civilian aircraft tracked by radar and air traffic control (ATC), and unidentified objects or those identified as enemy aircraft, which could threaten the mission and friendly forces.
Each pilot watches a segment of that common picture, augmented with information obtained from the sensors on board, such as radars, electronic support measures, missile launch warning systems or infrared search and track (IRST) and other electro-optical systems.

Each of these sensors provides critical information to build situation awareness, enabling the pilot or crew to respond promptly to evolving threats, for example to a pattern of signals that could indicate hostile action. Such responses could vary from evasive manoeuvres and flare release to the use of electronic warfare, or a pre-emptive strike by high-speed anti-radiation missiles.
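Conceptually, this is a mapping from observed warning cues to a ranked set of candidate responses. The short sketch below illustrates the pattern with invented cue names and actions; a real mission computer would also weigh geometry, remaining countermeasures and the rules of engagement.

```python
# Hypothetical mapping of warning cues to candidate responses, for illustration only.
RESPONSES = {
    "IR_LAUNCH":       ["evasive manoeuvre", "release flares"],
    "RADAR_LOCK":      ["notch manoeuvre", "activate jammer"],
    "SAM_ACQUISITION": ["activate jammer", "cue anti-radiation missile"],
}

def recommend(cues: list[str]) -> list[str]:
    """Return a de-duplicated, ordered list of responses for the observed cues."""
    ordered = []
    for cue in cues:
        for action in RESPONSES.get(cue, []):
            if action not in ordered:
                ordered.append(action)
    return ordered

print(recommend(["RADAR_LOCK", "SAM_ACQUISITION"]))
# ['notch manoeuvre', 'activate jammer', 'cue anti-radiation missile']
```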

Connecting the situational picture to centralised decision making is part of the Resilient Synchronised Planning and Assessment for the Contested Environment (RSPACE) system that BAE Systems is developing under a DARPA contract. The project seeks to develop human-centred software decision aids to help air operators improve operational control in a complex battlespace. As part of the RSPACE programme, the company developed the Distributed, Interactive, Command-and-Control Tool (DIRECT) to improve air battlespace awareness. DIRECT uses a visual interface to generate real-time alerts for operators to evaluate areas of concern during the planning and execution of a mission. The software also automatically adjusts to minimise bandwidth when communications are limited or unreliable, helping to assure mission continuity and completion.

Fusion Renders a Better Picture

A new challenge for military pilots is information flooding. As aircraft become connected, the amount of information generated on board is staggering. With data obtained from local sensors and constantly streaming from sources across the network, situation awareness is no longer simply a matter of practice and discipline. It has become a complex technological and system-integration challenge spanning information flow, data fusion and prioritisation, the human-machine interface, and ergonomics. Current developments also leverage machine learning and artificial intelligence to reduce the information to a humanly manageable workload.
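One common way to make the flood manageable is to score and rank incoming tracks so that only the most relevant are pushed to the pilot. The sketch below shows a toy prioritisation scheme; the attributes, weights and thresholds are invented for illustration and are not drawn from any fielded system.

```python
from dataclasses import dataclass

@dataclass
class Track:
    ident: str         # 'hostile', 'unknown' or 'friendly'
    range_km: float
    closing_ms: float  # positive when closing on ownship

def threat_score(t: Track) -> float:
    """Illustrative scoring: hostile, fast-closing tracks at short range rank highest.
    Real fusion engines weigh far more attributes (type, emissions, altitude)."""
    ident_weight = {"hostile": 1.0, "unknown": 0.6, "friendly": 0.0}[t.ident]
    closure = max(t.closing_ms, 0.0) / 300.0          # normalise closure rate
    proximity = max(0.0, 1.0 - t.range_km / 150.0)    # closer = more urgent
    return ident_weight * (0.6 * proximity + 0.4 * closure)

tracks = [Track("unknown", 120, 50), Track("hostile", 60, 280), Track("friendly", 10, 0)]
for t in sorted(tracks, key=threat_score, reverse=True)[:2]:   # show only the top tracks
    print(t.ident, round(threat_score(t), 2))
```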

BAE Systems’ “wearable cockpit” utilises augmented reality to build virtual displays and instruments in front of the pilot’s eyes by using a semi-transparent augmented reality display that projects virtual objects on top of the live scene. (Source: BAE Systems)

The electronic hardware and processing power available to system designers in the past could not support the integration of multiple analogue sensor feeds onto a common screen, necessitating the use of many displays to monitor all sensors. The integration of all this information into a single situational picture was impractical and often required two crew members to handle the workload. The introduction of data buses and digital architectures in fourth-generation fighters brought situational displays that could depict a clear picture of the battle. However, sensor management still required pilots to flip through many ‘pages’ to perform specific tasks, and critical indications, such as threat warnings, remained on a dedicated display, augmented with audio signals to attract attention.

Managing an aerial picture on a small cockpit display is a difficult task, particularly when the situation is unfolding at high speed, at long range and with large formations involved. That is where large area displays (LADs) come in handy. First used in the F-35 LIGHTNING II, the LAD has since been implemented in several aircraft models, for example as a vertical display on the centre pedestal of the F-16 (as used in the latest F-16V configuration), and in the F-15X, GRIPEN E and future variants of the F/A-18.

In a modern application such as the F-35, the LAD is combined with helmet displays, and the mission and display computers manage the dataflow to show only the information essential to the pilot at a specific time. The resulting reduction in workload enables a single pilot to perform missions previously handled by a full crew. This combination lets the pilot fly ‘heads out’ during the engagement phase and use the LAD when the situation allows or requires planning, situation assessment and the management of remote assets.

The Elbit BRIGHTNITE sensor delivers a clear image of areas under thick smoke or dust cover. (Photo: Elbit)

Another advantage of helmet displays and cueing systems is the ability to designate objects by co-ordinating the eye view (line of sight) with hand controls. Before such systems were introduced, pilots had to manoeuvre the aircraft to bring the head-up display onto the point of interest to designate a target. With a helmet display sight, all that is necessary is to look at the target and lock it with the click of a button.
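In essence, designation by line of sight is a nearest-angle search: the system compares the helmet’s look vector with the direction to each candidate track and locks the one inside a small cone around the gaze. The following sketch illustrates the geometry with invented track data and an arbitrary cone size.

```python
import numpy as np

def designate(los_unit: np.ndarray, tracks: dict[str, np.ndarray], max_angle_deg: float = 5.0):
    """Pick the track whose direction is closest to the helmet line of sight.
    Returns None if nothing falls within the designation cone."""
    best, best_angle = None, max_angle_deg
    for name, direction in tracks.items():
        d = direction / np.linalg.norm(direction)
        angle = np.degrees(np.arccos(np.clip(np.dot(los_unit, d), -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

los = np.array([1.0, 0.05, 0.0]); los /= np.linalg.norm(los)   # pilot looking nearly dead ahead
tracks = {"bandit-1": np.array([1.0, 0.0, 0.0]), "bandit-2": np.array([0.5, 0.8, 0.1])}
print(designate(los, tracks))   # 'bandit-1' is inside the cone and closest to the gaze
```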
Modern helmet displays also enable the integration of raster (bitmap) imagery on the eyepiece or visor, displaying images and live video as a ‘window on the world’. A small window can assist in target recognition and identification, using the aircraft’s telescopic electro-optical systems, such as a targeting pod or infrared search and track (IRST), as sensors. On a night flight, the whole visor can show imagery from a navigation FLIR or Distributed Aperture System (DAS), enabling the pilot to fly ‘heads out’ at night, seeing the terrain and flying as in clear daylight, avoiding other aircraft, terrain and threats in the vicinity without depending on traditional flight instruments.
To further reduce workload, designers can turn to new sensory channels, such as 3D audio and tactile cueing. 3D audio relies on the human capacity to identify the direction an audible cue comes from; directional alerts thus generate spatial cues that the brain processes without effort. A major limitation of 3D audio is the brain’s inability to process multiple audible cues simultaneously, so this method can be used only for critical alerts. A tactile channel, by contrast, can carry multiple cues simultaneously: using a tactile cuff, basic instructions such as coarse directional cues or radio channel confirmations can be transmitted and processed intuitively, even under extreme stress and workload.

Helmet displays enable capabilities never before used in aviation. Projected on a transparent visor positioned near the eye, the display superimposes digital objects on the scene. The sky view and terrain are augmented with virtual objects to show navigational waypoints, depicted as a ‘highway in the sky.’ Fixed obstacles, such as wires, antennae and buildings, or computed obstacles, such as ridgelines, are also represented to warn the pilot of flight safety hazards. Dynamic objects, such as friendly aircraft and hostile SAMs, are displayed as virtual objects, the latter represented as ‘domes’ indicating their effective coverage. Such presentations enable the pilot to circumvent those threats just as they would other obstacles. To fly through this virtual world, the pilot needs a good view of the real outside world, with safe paths correctly presented on it in real time.
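Rendering a threat dome as an obstacle also allows simple geometric checks on the planned route, for example whether a leg between two waypoints penetrates the dome. The sketch below shows such a check, with the dome simplified to a sphere and all coordinates invented for the example.

```python
import numpy as np

def leg_enters_dome(p0: np.ndarray, p1: np.ndarray, centre: np.ndarray, radius_km: float) -> bool:
    """True if the straight leg from p0 to p1 (x, y in km, z altitude in km)
    passes within the threat dome, modelled as a simple sphere around the SAM site."""
    seg = p1 - p0
    t = np.clip(np.dot(centre - p0, seg) / np.dot(seg, seg), 0.0, 1.0)  # closest point on the leg
    closest = p0 + t * seg
    return np.linalg.norm(closest - centre) < radius_km

sam = np.array([50.0, 20.0, 0.0])     # hypothetical SAM site
leg = (np.array([0.0, 0.0, 5.0]), np.array([100.0, 30.0, 5.0]))
print(leg_enters_dome(*leg, sam, 40.0))   # True: replan the leg or fly around the dome
```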

The Advanced Cockpit of Elbit Systems of America optimises tactical situational displays, processes advanced applications and provides high-definition formats for complex sensor video presentations. (Photo: Elbit)

Elbit Systems has developed the SUPERVISION concept to present data to the aircrew in an intuitive manner, reducing crew workload and improving pilot response. Information layers, including the outside world as seen by night-vision or thermal imagers, terrain, known obstacles, threats with their airspace coverage, and the recommended flight path, are all superimposed and correlated on the world view. By fusing these layers, SUPERVISION projects the information on the visor, enabling ‘head out’ flight in any terrain and under all visibility conditions. Innovative display solutions utilise HUDs or HMDs, relieving aircrews of the mental load of interpreting the data, while the spatial presentation of information elements enhances situation awareness and lets the aircrew focus on the essential elements of the mission in hand. With advancing technologies, SUPERVISION will continue to evolve toward a networked service, in which multiple airborne or air and ground elements co-operating on a mission share a complete 3D situational picture among all participants.

Aircraft with Artificial Intelligence (AI)

Integration of artificial intelligence in the cockpit would offer major advantages for the air force as a whole, yet poses significant challenges for pilots who have relied exclusively on their skills and senses for decision making. AI capabilities have been proven in studies conducted in flight simulators, where human pilots fought opponents flown or assisted by AI.

Advanced cockpit of a new-generation F-15 (Photo: Elbit)

One approach harnesses ‘connected decoys’, namely offboard electronic warfare decoys and ‘defensive missiles’, which act as bait, luring enemy missiles into divulging their guidance strategy and enabling the defender to optimise evasive manoeuvres and countermeasures to defeat them. Such methods would enable manned and unmanned combat aircraft to increase survivability and gain the upper hand when dealing with enemy fighters and surface-to-air missiles in defensive and offensive actions. An Israeli research team, led by Technion Prof. Tal Shima, studied this co-operative evasion and pursuit concept under a US Air Force Research Laboratory study.

ALPHA, an AI application developed by Psibernetix Inc., the US Air Force Research Laboratory and the University of Cincinnati, has also demonstrated the advantage of AI. It uses fuzzy logic to make air battle management decisions in a fraction of a second and to select the plan with the maximum probability of success. In its earliest iterations, ALPHA consistently outperformed other computer programmes in air combat simulations. After the system matured, it was pitted against seasoned human pilots and won every match. ALPHA is so fast that it can consider and co-ordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than its human opponents can blink.
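Fuzzy logic maps crisp sensor values onto graded memberships (‘range is short’, ‘closure is high’) and combines them through rules. The sketch below is a single-rule toy example of the technique, not ALPHA’s actual rule base; the breakpoints and inputs are invented.

```python
def ramp_down(x: float, full: float, zero: float) -> float:
    """Membership is 1 below `full`, falling linearly to 0 at `zero`."""
    return max(0.0, min(1.0, (zero - x) / (zero - full)))

def ramp_up(x: float, zero: float, full: float) -> float:
    """Membership is 0 below `zero`, rising linearly to 1 at `full`."""
    return max(0.0, min(1.0, (x - zero) / (full - zero)))

def engage_score(range_km: float, closure_ms: float) -> float:
    """One fuzzy rule: IF range is short AND closure is high THEN engage.
    'AND' is taken as the minimum; a real system aggregates many rules and defuzzifies."""
    return min(ramp_down(range_km, 10, 40), ramp_up(closure_ms, 100, 400))

print(round(engage_score(range_km=20, closure_ms=350), 2))   # ~0.67: lean towards engaging
```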

ALPHA also aims to lessen the likelihood of pilots’ tactical mistakes, since its operations already occur significantly faster than those of other language-based applications. An ALPHA-based virtual ‘co-pilot’ can take in the entirety of sensor data, organise it, create a complete mapping of a combat scenario and make or change combat decisions for a flight of four fighter aircraft in less than a millisecond.

Ultimately, future air combat will require reaction times that surpass human capabilities. Such engagements will integrate AI wingmen, with Unmanned Combat Aerial Vehicles (UCAVs) acting as the tip of the spear, teamed with manned aircraft, likely stealth fighters and bombers, assuming the battle-management roles. Such systems would be able to process situation awareness, determine reactions, select tactics, manage weapons use and more. AI-driven systems could simultaneously evade dozens of hostile missiles, take accurate shots at multiple targets, co-ordinate the actions of squad mates, and record and learn from observations of enemy tactics and capabilities.

AFRL has been experimenting with the ‘unmanned wingman’ concept for some time, using both surrogate manned aircraft and unmanned aircraft, with the manned fighters fitted with Lockheed Martin’s Open Mission System (OMS) providing the integration for the manned-unmanned mission. Among the test programmes conducted in 2017 was HAVE RAIDER II, in which an experimental F-16, acting as a surrogate UCAV, autonomously reacted to a dynamic threat environment during an air-to-ground strike mission.

Distributed operations of this kind rely on robust communications that link all members at all times, even under extensive countermeasures. In another test, Lockheed Martin demonstrated how aircraft equipped with OMS could exploit onboard communications to link and transfer information between different platforms, using standard Link 16 as well as the covert datalinks used on stealth platforms (such as the F-22). This capability was demonstrated by a U-2 that provided multi-domain command and control, sharing data across dissimilar platforms in a denied environment.

Pursuing another research area, DARPA is developing System of Systems Integration Technology and Experimentation (SoSITE) to connect battle managers, manned aircraft and unmanned wingmen. Boeing, General Dynamics, Lockheed Martin and Northrop Grumman are developing and analysing promising architectures and designing plans for flight experimentation, while Apogee Systems, BAE Systems and Collins Aerospace are developing tools and technologies to enhance open-system architecture approaches. Such systems are developed from the outset to be robust and resilient to cyberattacks.

The programme has already demonstrated the ability to process sensor data automatically in different ways. For example, the F-35 radar was linked with DARPA’s automatic target recognition software to reduce operator workload and create a comprehensive picture of the battlespace. Data-rich messages were then transferred over existing datalinks, sharing information between different systems on various platforms. Such links also enabled the connection of ground-based cockpit simulators with live aircraft systems in real time, enabling users to shorten data-to-decision timelines.

With advanced information sharing, manned/unmanned teaming would reduce the human crewmembers’ cognitive burden and workload, allowing the warfighter to focus on creative and complex planning and management. Each drone will have sufficient onboard autonomy to complete all basic flight operations untethered from a ground station and without full-time direction from the manned lead.

The US Air Force has already begun flying aircraft equipped to control a full ‘loyal wingman’ drone formation and plans to test the concept through 2022, eventually assigning drone formations to all front-line aircraft: the F-35 LIGHTNING II, the F-22 RAPTOR and possibly the newest bomber, the B-21 RAIDER. Combined with next-generation Very Long-Range Air/Air Missiles (VLRAAM) and Beyond Visual Range Air/Air Missiles (BVRAAM) engaging the enemy at extended ranges, such formations are expected to overmatch existing and future enemy fighters and air defence networks.

Tamir Eshel is a security and defence commentator based in Israel.
