Application Note 1: Modeling Radar Signature Of Real-Sized Aircraft Using EM.Tempo


Objective: In this article, we explore computing the RCS of electrically large structures such as aircraft.

Concepts/Features:

  • EM.Tempo
  • Radar Cross Section
  • Large Projects
  • Plane Wave Source
  • Cloud-Based Resources

Minimum Version Required: All versions

Download Link: None

Introduction

In this application note, we will demonstrate how EM.Tempo can be used to compute the bistatic radar cross section (RCS) of a large-scale target such as the Dassault Mirage III fighter aircraft at an operating frequency of 850 MHz. A high-fidelity mesh of a structure like this involves tens or hundreds of millions of cells, and as the operating frequency increases, so does the size of the computational problem. Throughout the article, we will discuss some of the challenges encountered when working with electrically large models. You can learn more about the basic procedure for setting up an FDTD RCS simulation in "EM.Tempo Tutorial Lesson 2: Analyzing Scattering From A Sphere".

Computational Environment

The Mirage III CAD model has an approximate length of 15m, a wingspan of 8m, and an approximate height of 4.5m. Expressed in free-space wavelengths at 850 MHz, the approximate dimensions of the aircraft model are 42.5 λ0 x 22.66 λ0 x 12.75 λ0. Thus, for the purposes of EM.Tempo, we need to solve a region of about 12,279 cubic wavelengths. For problems of this size, a large amount of CPU memory is needed, and a high-performance, multi-core CPU is desirable to reduce the simulation time.
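
As a quick sanity check of these figures, the wavelength and electrical volume can be reproduced with a few lines of arithmetic. This is purely illustrative and not part of the EM.Tempo workflow; the c ≈ 3×10^8 m/s approximation is used to match the rounded numbers above:

  # Electrical size of the Mirage III model at 850 MHz.
  c0 = 3.0e8                    # approximate speed of light, m/s
  f = 850e6                     # operating frequency, Hz
  lam0 = c0 / f                 # free-space wavelength, ~0.353 m

  length, span, height = 15.0, 8.0, 4.5     # approximate model dimensions, m
  dims = [round(d / lam0, 2) for d in (length, span, height)]
  volume = (length * span * height) / lam0**3

  print(dims)            # [42.5, 22.67, 12.75] wavelengths
  print(round(volume))   # ~12,282 -- consistent with the ~12,279 figure above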

Amazon Web Services allows one to acquire high-performance compute instances on demand and pay on a per-use basis. To be able to log into an Amazon instance via Remote Desktop Protocol (RDP), the EM.Cube license must allow terminal services. For the purpose of this project, we used a c4.4xlarge instance running Windows Server 2012. This instance has 30 GB of RAM and 16 virtual CPU cores. The CPU for this instance is an Intel Xeon E5-2666 v3 (Haswell) processor.

Importing the CAD Model & Simulation Setup

The CAD model used for this simulation was obtained from GrabCAD, an online repository of user-contributed CAD files and models. The model is in the IGES file format. After being imported into CubeCAD, the Mirage model is initially moved to a new perfect electric conductor (PEC) material group in EM.Tempo.

  • The complete CAD model of Mirage aircraft imported to EM.Tempo's project workspace.

For the present simulation, we model the entirety of the aircraft, except for the cockpit, as PEC. For the cockpit, we use EM.Cube's material database to select one of the several glass types, with εr = 6.3 and σ = 0.017 S/m; a quick estimate of the loss these values imply at 850 MHz is given after the figure below.

  • Selecting glass as cockpit material for the Mirage model.
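
For context, these glass parameters correspond to a mildly lossy dielectric at the operating frequency. A minimal sketch of the loss-tangent arithmetic (illustrative only; the parameter values are the ones selected from the material database above):

  import math

  # Loss tangent implied by the cockpit glass parameters at 850 MHz.
  eps0 = 8.854e-12     # permittivity of free space, F/m
  eps_r = 6.3          # relative permittivity of the selected glass
  sigma = 0.017        # conductivity, S/m
  f = 850e6            # operating frequency, Hz

  tan_delta = sigma / (2 * math.pi * f * eps0 * eps_r)
  print(f"tan(delta) = {tan_delta:.3f}")   # about 0.057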

Since EM.Tempo's mesher is very robust with regard to small model inaccuracies or errors, we don't need to perform any additional healing or welding of the model.


Observables

First, we create an RCS observable with one-degree increments in both the phi and theta directions. Although increasing the angular resolution of the farfield calculation significantly increases the simulation time, the RCS of an electrically large structure tends to have very narrow peaks and nulls, so this fine resolution is required.

  • Adding an RCS observable for the Mirage project.
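
To see why this resolution is costly, consider how many far-field directions a one-degree grid implies. The sketch below assumes theta spans 0-180 degrees and phi spans a full 360 degrees; the exact ranges are set in the RCS observable dialog, so treat this as an order-of-magnitude estimate:

  # Far-field directions implied by 1-degree increments in theta and phi.
  theta_points = 180 + 1        # 0, 1, ..., 180 degrees
  phi_points = 360              # 0, 1, ..., 359 degrees
  print(theta_points * phi_points)   # 65,160 directions to evaluate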

We also create two field sensors: one with a z-normal underneath the aircraft, and another with an x-normal along the length of the aircraft. The nearfields are not the prime observable for this project, but they can add insight into the solution and do not add much overhead to the simulation.

Plane Wave Source

Since we are computing a radar cross section, we also need to add a plane wave source. For this example, we specify a TMz plane wave with θ = 135 degrees and φ = 0 degrees, or equivalently a propagation direction of:

[math] \hat{k} = \frac{\sqrt{2}}{2} \hat{x} - \frac{\sqrt{2}}{2} \hat{z} [/math]
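
As a quick check, the unit propagation vector above follows from the standard spherical-angle convention (theta measured from the +z axis, phi from the +x axis); the few lines below are illustrative only:

  import math

  # Unit propagation vector of the incident plane wave from its spherical angles.
  theta = math.radians(135.0)
  phi = math.radians(0.0)
  k_hat = (math.sin(theta) * math.cos(phi),
           math.sin(theta) * math.sin(phi),
           math.cos(theta))
  print(k_hat)   # approximately (0.7071, 0.0, -0.7071)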

Mesh Generation & FDTD Simulation

For the mesh, we use the "Fast Run/Low Memory Settings" preset. This sets the minimum mesh rate to 15 cells per λ and permits grid adaptation only where necessary. The preset provides slightly less accuracy than the "High Precision Mesh Settings" preset, but results in smaller meshes and therefore shorter run times.

At 850 MHz, the resulting FDTD mesh contains about 270 million cells; a rough estimate of the memory such a mesh requires is sketched after the figures below. With mesh mode turned on in EM.Cube, we can visually inspect the mesh.

  • Mesh settings used for the Mirage project.
  • Mesh detail near the cockpit region of the aircraft.
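
To get a feel for why a mesh of this size demands so much RAM, here is a rough lower-bound estimate of the field storage alone. It assumes single-precision storage of the six field components per cell; the actual solver also stores material and update-coefficient arrays, so the true footprint is considerably larger:

  # Back-of-the-envelope memory estimate for a ~270-million-cell FDTD mesh.
  cells = 270e6
  bytes_per_cell = 6 * 4        # Ex, Ey, Ez, Hx, Hy, Hz at 4 bytes each
  print(f"{cells * bytes_per_cell / 2**30:.1f} GiB")   # about 6.0 GiB for the fields alone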

For the simulation engine, we use the default settings, except for the "Thread Factor". The "Thread Factor" setting essentially tells the FDTD engine how many CPU threads to use during the time-marching loop.

  • Engine settings used for the Mirage project.

For a given system, some experimentation may be needed to determine the best number of threads to use. In many cases, using half of the available hardware concurrency works well, because many modern processors share one memory port between two cores. In other words, for many problems, the FDTD solver cannot load and store data from CPU memory quickly enough to keep all available threads busy. The extra threads sit idle waiting for data, and a performance penalty is incurred due to the increased thread context switching.
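
The snippet below merely illustrates this "half the hardware concurrency" starting point; the Thread Factor in EM.Tempo is set manually, and the best value for a given machine should still be found by experiment:

  import os

  # Suggest a starting thread count for a memory-bandwidth-bound solver.
  logical_cpus = os.cpu_count() or 1
  print(f"Logical CPUs: {logical_cpus}, suggested starting thread count: {max(1, logical_cpus // 2)}")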

EM.Cube will attempt to use a version of the FDTD engine optimized for Intel's AVX instruction set, which provides a significant performance boost. If AVX is unavailable, a less optimized version of the engine will be used.

After the sources, observables, and mesh are set up, the simulation is ready to be run.

The complete simulation, including meshing, time-stepping, and the farfield calculation, took 5 hours, 50 minutes on the above-mentioned Amazon instance. The average performance of the time loop was about 330 MCells/s. The farfield computation accounts for a significant portion of the total simulation time; its cost could have been reduced with larger theta and phi increments, but, as mentioned previously, electrically large structures require resolutions of 1 degree or less.
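
These numbers can be related with some simple arithmetic. One FDTD time step updates every cell in the mesh once, so the reported throughput translates into roughly the following wall-clock time per step (the total number of time steps depends on the convergence criterion and is not quoted here):

  # Time per FDTD time step implied by the reported throughput.
  cells = 270e6                 # mesh size
  throughput = 330e6            # reported average, cells per second
  print(f"{cells / throughput:.2f} s per time step")   # about 0.82 s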

Simulation Results

After the simulation is complete, we can view the RCS pattern as shown below. We can also plot 2D Cartesian and polar cuts from the Data Manager.

  • RCS pattern of the Mirage model at 850 MHz in dBsm.
  • XY cut of RCS
  • ZX cut of RCS
  • YZ cut of RCS

The nearfield visualizations are also available as seen below:

  • Near-field distributions recorded by the two field sensors defined for the Mirage model.


  • XY cut of RCS in dBsm
  • ZX cut of RCS in dBsm
  • YZ cut of RCS in dBsm