S3HIFT -

Cyber security measures for automotive and medtech products
19.10.2024
Consulting, Cyber Security

2. Basics

2.1 Threat Analysis and Risk Assessment (TARA)

Threat Analysis and Risk Assessment (TARA) is an integral part of the development process for (security-)critical systems. It is a comprehensive method for systematically identifying, assessing and managing the potential threats and risks associated with a product. By using TARA, we gain a deep understanding of the product's vulnerabilities and dependencies.

This process allows us to anticipate the types of attackers that could target the product and the various threats that could arise. Through a detailed risk analysis, we can prioritize these threats and take effective countermeasures to reduce the risk to the product and its users. For the push device, TARA provides a framework that ensures all potential security vulnerabilities are identified early and proactively addressed, improving the overall security posture of the product.

2.2 Fuzz testing

Fuzz testing was developed at the end of the 1980s by Barton Miller at the University of Wisconsin-Madison to test the robustness of UNIX programs. The basic idea is to test the programs with the help of random inputs in order to detect undesirable system behavior or even system crashes. As many vulnerabilities have been identified through fuzz testing, this test approach has proven its worth and is now part of software development - or at least it should be. Various methods have been developed for generating suitable input data, which we will discuss later.

The advantage of fuzz testing is that it finds security gaps and vulnerabilities that other tests do not cover. Even a systematic TARA does not provide this type of vulnerability analysis. In this respect, fuzz testing is a valuable addition.

These insights allow us to refine our security requirements and ensure that the product is robust enough to withstand potential attacks. In addition, we can use fuzz testing to verify that the product maintains its functionality and reliability even in the face of unexpected challenges. This comprehensive security concept not only increases the reliability of the product, but also strengthens customer confidence by ensuring that their needs are met safely and effectively.

3. Secure development process

By integrating TARA and fuzz testing into a secure development process, we ensure that the system to be developed is resilient to security threats.

Figure 1: TARA process screen

The core of the Secure Development Process is the TARA, as shown in Figure 1. The first step is to collect and document information about the system. The second step is to analyze which assets need to be protected. The third step is to identify attack scenarios and potential attackers. Once the attack scenarios are known, a risk assessment can be carried out, which in the final step derives defensive measures, such as additional security requirements.

A feedback loop is also integrated into the TARA process, which ensures that new findings, such as a new vulnerability in the operating system, are considered and that risk assessments are updated (e.g. a changed probability or extent of damage).
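The core TARA steps (assets, attack scenarios, risk assessment, prioritization) can be sketched as a minimal risk matrix in Python. The likelihood/impact scale and the example scenarios below are illustrative assumptions, not the project's actual ratings:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    scenario: str
    likelihood: int  # assumed scale: 1 (unlikely) .. 5 (very likely)
    impact: int      # assumed scale: 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        # simple risk-matrix score: likelihood x impact
        return self.likelihood * self.impact

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)

# Steps 2-4: asset to protect, attack scenarios, risk assessment
firmware = Asset("firmware image")
firmware.threats.append(Threat("tampered update pushed via USB", likelihood=4, impact=5))
firmware.threats.append(Threat("rollback to a vulnerable version", likelihood=2, impact=4))

# Final step: rank the threats so countermeasures address the highest risks first
ranked = sorted(firmware.threats, key=lambda t: t.risk, reverse=True)
for t in ranked:
    print(f"{t.scenario}: risk {t.risk}")
```

Updating a `likelihood` or `impact` value and re-ranking corresponds to the feedback loop described above.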

Figure 2: Fuzz testing sub-process

Fuzz testing is a sub-process of TARA that serves to discover further possible vulnerabilities or security gaps (process step 1: collect system information). If vulnerabilities or security gaps are identified, this information is used to determine the attack scenarios.

The fuzz testing process first uses an evaluation model to determine the system location where an attack is most likely to occur. This is similar to an apartment or a house: burglars will choose the easiest route and not dig a tunnel and drill through the foundations. The homeowner can determine which route is most likely by carrying out an analysis: How secure are the entrance, cellar and patio doors and how secure are the windows or is there a ladder in the garden that the burglar can use to get up to the second floor? The same applies to technical systems: Which interface(s) is an attacker most likely to use? In this consideration, we include possible side channels through which system information flows that an attacker or we as fuzz testers can use.
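The burglar analogy can be made concrete with a small scoring sketch: each interface gets an exposure and a hardening rating, and the evaluation picks the "easiest route". All ratings, interface names and the scoring formula below are illustrative assumptions:

```python
# Hypothetical ratings per interface: "exposure" (how reachable it is)
# and "hardening" (how well it is protected). Values are illustrative.
interfaces = {
    "USB port":          {"exposure": 5, "hardening": 2},
    "Bluetooth":         {"exposure": 4, "hardening": 3},
    "Ethernet":          {"exposure": 3, "hardening": 4},
    "JTAG debug header": {"exposure": 1, "hardening": 1},
}

def attack_likelihood(props: dict) -> float:
    # High exposure raises the likelihood; strong hardening lowers it
    return props["exposure"] / props["hardening"]

# The "front door" of the system: the easiest route for an attacker
most_likely = max(interfaces, key=lambda name: attack_likelihood(interfaces[name]))
print(most_likely)
```

In practice, side channels would be rated and ranked in the same way as the interfaces.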

These interfaces and side channels represent the ideal configuration for carrying out fuzz testing. The basic process for this is:

  1. Generator to generate input data
  2. Test automation & injector, with which the SUT (system under test) is controlled
  3. Monitor & measurement system for measuring system outputs & side channels
  4. Analyzer (AI-based) to identify anomalies (possible system vulnerabilities)

A second feedback loop is created by using the analysis results to generate new input data. If an anomaly is detected, the input data can be varied in a targeted manner. By varying or expanding the input data, it is possible to test whether other potential vulnerabilities exist.
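The four process steps and the second feedback loop can be sketched as a minimal fuzzing loop. The stubbed SUT, the anomaly criterion and all function names are assumptions for illustration, not the project's actual implementation:

```python
import random

def generator(seed_pool):
    """1. Generator: mutate a random seed from the pool into new input data."""
    data = bytearray(random.choice(seed_pool))
    if data:
        data[random.randrange(len(data))] ^= random.randrange(1, 256)
    return bytes(data)

def injector(sut, data):
    """2. Injector: drive the SUT with the generated input (stub here)."""
    return sut(data)

def monitor(response):
    """3. Monitor: measure outputs / side channels (stub: flag error responses)."""
    return response == b"ERROR"

def analyzer(data, anomaly, seed_pool):
    """4. Analyzer: on an anomaly, feed the input back as a new seed
    so follow-up runs vary it further (the second feedback loop)."""
    if anomaly and data not in seed_pool:
        seed_pool.append(data)

# Toy SUT that misbehaves on inputs containing a NUL byte
def sut(data):
    return b"ERROR" if b"\x00" in data else b"OK"

seeds = [b"PING", b"SET 1"]
for _ in range(200):
    data = generator(seeds)
    anomaly = monitor(injector(sut, data))
    analyzer(data, anomaly, seeds)
```

A real setup would replace the stubs with the configured interface driver, the measurement instruments and the AI-based analysis.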

We will go into the technical details of implementation using the "Push Device" example later.

4. Tools & tool chain

Random-based and automated fuzz testing produces large amounts of data, especially test data. To cope with this, the data from the secure development process must be managed with suitable tools so that it can be read, analyzed and tracked by humans. We plan to use the following tools for this purpose:

4.1 Enterprise Architect modeling tool

Enterprise Architect is used to create the following models:

  • TARA model: Documentation of assets, threats (attack scenarios), risks and derived security requirements
  • Evaluation model: Determination of an attacker's most likely attack path
  • Attack model: Model-based procedure for the generation of inputs (= attack scenarios)

4.2 Model-based test tool MBTsuite

Model-based testing with MBTsuite is a proven approach from sepp.med that can be used to generate and automate a large number of test cases from a test model. It is therefore ideal for carrying out and visualizing fuzz testing. By changing the test model, testers can modify and vary the test sequence without needing extensive programming knowledge for test automation.

You can find more information about MBTsuite here: MBTsuite - "Model-Based Software Testing made easy"

We use MBTsuite to generate test cases from the attack model as well as from the evaluation model in order to determine the "testability level" (a probability measure for an attack) for the configuration variants.

4.3 ALM tool Jira + plug-ins

All relevant data should then flow into Jira, which we want to use as an ALM tool. Plug-ins such as XRay should support test management.

Evaluation and analysis functions are implemented here with dashboards and reports to ensure analysis and traceability.

4.4 Python programming language

Test automation takes place using the Python scripting language. MBTsuite generates executable Python scripts, which are then executed. The test specification, test scripts and test results are transferred to the ALM tool.

5. Technical implementation

A test system was developed for the research project that provides the fuzzing methods presented in Section 5.2 and performs automated evaluation using AI. This system was implemented in Python. The test system can be configured for the various side channels and response data and can therefore be used for different devices under test.

5.1 Structure

The developed system consists of the following components:

  • Fuzz engine: This component is responsible for generating the fuzzing messages and, depending on the configuration, generates different input data that is sent to the device under test via the configured interface.
  • Data Manager: The Data Manager manages the data generated. This includes the data supplied by the device and the data supplied by the side channels.

This data is passed on to the evaluation system and also saved in files in various formats, depending on the configuration. The data can also be stored in a database and is therefore also available for subsequent evaluation.

  • Monitoring: This component evaluates the data supplied by the device and detects any anomalies using AI algorithms. The results are sent back to the Data Manager and recorded on the one hand and used as feedback to the fuzz engine on the other.
  • Instrumentation: This module implements the various measuring devices such as an oscilloscope for current measurement, possibly a thermal imaging camera or thermal sensors and the like. The data obtained from the instruments is recorded via the Data Manager and forwarded to the monitoring system for evaluation.
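The Data Manager's role, collecting stimulus, response and side-channel data per test step and persisting it in configurable formats, can be sketched as follows. The class name, fields and file layout are assumptions for illustration, not the project's actual API:

```python
import json
from pathlib import Path

class DataManager:
    """Collects device responses plus side-channel samples per stimulus
    and persists them in the configured formats (illustrative sketch)."""

    def __init__(self, out_dir: str = "fuzz_run", formats=("json",)):
        self.out_dir = Path(out_dir)
        self.formats = formats
        self.records = []

    def record(self, stimulus: bytes, response: bytes, side_channels: dict):
        # One record per test step: input, device response, instrument readings
        self.records.append({
            "stimulus": stimulus.hex(),
            "response": response.hex(),
            "side_channels": side_channels,  # e.g. {"current_mA": 41.2}
        })

    def flush(self):
        # Persist the run so it remains available for subsequent evaluation
        self.out_dir.mkdir(exist_ok=True)
        if "json" in self.formats:
            (self.out_dir / "run.json").write_text(json.dumps(self.records, indent=2))

dm = DataManager()
dm.record(b"\x00\x41", b"OK", {"current_mA": 41.2})
dm.flush()
```

A database backend, as mentioned above, would be a second storage target alongside the file formats.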

5.2 Fuzz testing and fuzzing methods

Fuzz testing is an attempt to attack the interfaces of the device using randomly generated tokens. In the medical sector, the main interfaces that come into question here are USB ports, Bluetooth and network connections.

The tokens can be generated in various ways. We have researched and implemented the following methods:

  • Generative: The token is generated as a random character string of a certain length and sent to the device.
  • Evolutionary: A randomly generated character string is mutated and sent to the device.
  • Mutative: A predefined character string is randomly mutated and sent to the device.
  • Keyword-based: A random sequence is selected from a list of keywords and sent to the device.

These methods can also be combined with each other to find further points of attack. In addition, the feedback loop described above is implemented from the evaluation back to the generator; we refer to this as feedback-based fuzzing, and it generates further combinations of input data.
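The four token generation methods can be sketched as small Python functions; parameters such as token length and mutation count are illustrative assumptions:

```python
import random
import string

def generative(length: int = 16) -> str:
    """Generative: random character string of a given length."""
    return "".join(random.choices(string.printable, k=length))

def mutative(seed: str, n_mutations: int = 2) -> str:
    """Mutative: randomly mutate a predefined character string."""
    chars = list(seed)
    for _ in range(n_mutations):
        chars[random.randrange(len(chars))] = random.choice(string.printable)
    return "".join(chars)

def evolutionary(length: int = 16, n_mutations: int = 2) -> str:
    """Evolutionary: mutate a randomly generated character string."""
    return mutative(generative(length), n_mutations)

def keyword_based(keywords: list) -> str:
    """Keyword-based: random sequence drawn from a keyword list."""
    return " ".join(random.choices(keywords, k=random.randint(1, 4)))
```

Combining the methods, e.g. mutating a keyword sequence, is then a simple composition such as `mutative(keyword_based(["GET", "SET"]))`.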

6. Areas of application

The system we have developed is particularly suitable as an additional test method in module testing and system testing. In the module test, the defined interfaces can be exercised with the fuzzing tokens, testing the module's reaction to unexpected inputs and signals. This serves to harden the components under test against external attacks and thus increases the security of the system being developed.

In the system test, the fuzzing tokens can then be sent to the external interfaces such as USB, Bluetooth or network interfaces, thus hardening the entire system against attacks. Thanks to the automatic evaluation by AI, we achieve a high benefit with little additional effort. 

7. Possible applications and limitations

Thanks to the extensive automation of the fuzzing tests and the evaluation by AI, we have implemented a largely automated test procedure that can be integrated into the test process with little effort. However, this procedure does not replace the classic test methods by any means, but it is an effective addition that we can use effectively.

Since the AI has to be trained and as much data as possible is required for this, the method has its limits where this data (both valid and erroneous data) cannot be generated for training the AI or can only be generated with difficulty. We have developed and implemented methods with which this data for medical devices can be generated and recorded largely automatically in order to use it as training data for the AI.

This method is not limited to medical devices, but can also be transferred to other industries. We have demonstrated this by transferring fuzz testing from the automotive sector to the medical sector.

8. Summary

With the S3HIFT project, we have defined a cross-industry approach in terms of processes and implemented it in terms of tools. The automated test procedure enables the early detection of security gaps and vulnerabilities during development, significantly improves system hardening and avoids a large number of problems in the operation of products and systems.