"Cipher Data Centric Security

July 11, 2023 | Technology

TERATEC 2023 - Thierry Leblond

1st June 2023

Introduction

Historically, all IT solutions are schematically built on the web architecture known as "three-tier":

  • 1st tier: a web browser that manages the presentation
  • 2nd tier: a server that handles requests, processes them and accesses the data
  • 3rd tier: a database that manages the data

In such a world, security has long been treated as an issue of network perimeter control: the good guys are inside the corporate network, and the bad guys are outside. As for the Internet, from a security point of view it is considered to be the ultimate evil.

Over the past 20 years, with the arrival of mobile devices and the widespread adoption of the Cloud, and with the acceleration of telecommuting over the past two years, the Internet has become the only professional network. The threats fall roughly into three categories:

  • confidentiality, which the General Data Protection Regulation has taken well in hand;
  • data integrity, of which ransomware is the main example today;
  • and finally, the most serious threat to organizational sovereignty, extraterritorial laws and the massive trawling of corporate data for the benefit of spy states.

In a world where data is transmitted unencrypted, data security is a real problem. What can be done about it?

Thierry Leblond, CEO Scille Parsec

Regulations and standards are emerging. Let me mention three of the most recent: 

  • The European NIS2 directive, adopted in December 2022, which Member States must transpose by the end of 2024;
  • The US DoD's Zero Trust strategy, released in November 2022, and the NIST (National Institute of Standards and Technology) zero trust standard (SP 800-207), released in August 2020;
  • NATO's 2020 standardization work on Data Centric Security.

We can quickly draw the following conclusions: 

  • data security must be cryptographically managed as close as possible to the user;
  • this cryptographic protection must cover all the basic security functions: confidentiality, integrity, non-repudiation, authenticity, anonymization, traceability, archiving, revocation, etc.

The world of end-to-end cyber protection of sensitive data therefore seems to us to be converging on new fundamentals:

  • The "zero trust " principle: "always check and never trust".
  • The "zero knowledge " principle: encapsulate data in cryptographic enclaves guaranteeing all security functions: confidentiality, integrity, authenticity, traceability, history, revocation.
  • The "Cypher Data Centric Security hearty :
    • ensure data security as close as possible to the user and the device;
    • cryptographically control the flow of data between enclaves and organizations, and guarantee compliance with classification and authorizations.

In a few words, end-to-end encryption (E2EE) is becoming a crucial defensive weapon at the heart of future geopolitical and sovereignty issues. 

I) Mr. Everyman's information system

When it comes to IT cybersecurity, we need to consider the attack path and the system's vulnerabilities. Where will the attacker be positioned?

Here is Mr. Everyman's infrastructure: users Alice, Bob and Charlie work on devices that communicate with remote servers and remote clouds via either Internet or intranet perimeter networks.

The first attack targets Users: this is the domain of social engineering. In technical terms, the countermeasures are strong authentication of the "Multi-Factor Authentication" (MFA) type, with the whole range of biometric-based solutions, or single-use tokens. These technologies are mature.

The second attack targets Devices: this is the domain of viruses and zero-day vulnerabilities. Today, the answer lies in "Endpoint Detection & Response" (EDR) solutions or, more broadly, XDR solutions ("Extended Detection and Response"). These are SaaS tools that provide comprehensive, optimized security by integrating security products into simplified solutions (correlated incidents, analytics, automated detection and response, AI and machine learning, and automatic remediation of affected resources).

The third attack targets the physical machines in infrastructures and clouds: the servers that encode and store data. How much confidence can I really have in an IT department, a service provider, my outsourcer or my cloud provider, especially if they are subject to the extraterritorial laws of their country? Is data hosted on a server that can potentially access all of it really protected from the outside?

The fourth attack involves internal and external networks. How can network requests be controlled when they can potentially come from anywhere? One solution is Zero Trust Network Access or ZTNA, which can be cleverly combined with VPN solutions.

Finally, the fifth attack is on the data itself. Do I really trust the private companies that run the intercontinental fibers? Do I trust the outsourcer who holds the keys to all my data? Do I trust my "honest but curious" system administrator? Are the information superhighways being tapped by the "Five Eyes" or by Chinese 5G routers?

This is where PARSEC comes in, encapsulating documents in attack-proof cryptographic envelopes.

The Zero Trust strategy of the US Department of Defense is a good illustration of this global issue, with its seven pillars.

The 7 pillars of zero trust

II) Data Zero Trust

Having identified the risks, we are now focusing exclusively on the issue of cyber data protection.

PARSEC has made a fundamental strategic choice to protect data: to use signature and encryption everywhere and on every share or exchange between PARSEC clients (human actors) and servers (non-human actors).

Each user generates and uses their own encryption keys, and each device generates and uses its own signature keys (see the sketch after this list):

  • Data passing between two devices is systematically encrypted end to end.
  • Data leaving a device is systematically signed.
  • Data exchanged between client and server is systematically signed.
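
Concretely, this per-user and per-device key discipline can be illustrated with libsodium-style primitives. The snippet below is a minimal sketch using PyNaCl; the names and flow are ours, not PARSEC's actual API.

```python
# Minimal sketch with PyNaCl (libsodium bindings); illustrative only,
# not PARSEC's actual API.
from nacl.public import PrivateKey, SealedBox
from nacl.signing import SigningKey

alice_enc = PrivateKey.generate()         # each user: their own encryption keys
alice_laptop_sig = SigningKey.generate()  # each device: its own signature keys
bob_enc = PrivateKey.generate()           # in reality Bob generates this himself

payload = b"contents of a shared document"

# Data leaving a device is systematically signed by that device...
signed = alice_laptop_sig.sign(payload)

# ...and systematically encrypted end to end for the recipient.
ciphertext = SealedBox(bob_enc.public_key).encrypt(bytes(signed))

# Only Bob's private key can decrypt; the device signature is then verified.
decrypted = SealedBox(bob_enc).decrypt(ciphertext)
assert alice_laptop_sig.verify_key.verify(decrypted) == payload
```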

PROBLEM No. 1: How to distribute keys easily? 

Signing and encryption have been around for 30 years; since the invention of PGP, the standard solution has been X.509 certificates issued by a certification authority. The difficulty with this approach is not the encryption itself, but key distribution, because the average user cannot be expected to understand what they are doing when managing keys and key transfers. When they fetch their correspondent's public key from a PGP key server, how do they really know it belongs to the right person? In fact, they have no way of knowing, and yet they retrieve the key and use it anyway.

We did try to find a solution with the "web of trust", which was supposed to create a network of trust by having public keys signed by third parties, but it failed because it hit the limits of human understanding: it is too complicated, so nobody uses it. Theoretically it is viable, but in practice it is unusable by the average person.

Our response to this complexity problem has been to integrate the trust mechanism into the application itself. This is why PARSEC embeds a dedicated PKI. The principle is very similar to that of a standard PKI system, with enrolment processes at the moment of token delivery and signature. But here, there is no longer a central PKI to go through: the PKI is built into the application. It is a "turnkey PKI".
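
To give an idea of what "integrating the PKI into the application" can mean, here is a heavily simplified sketch of a trust chain maintained inside the application: an already-trusted key certifies the key of a newly enrolled device, and any client can then verify the chain offline. Names and structure are illustrative assumptions, not PARSEC's actual design.

```python
# Application-embedded trust chain; illustrative, not PARSEC's design.
import json
from nacl.signing import SigningKey

root_admin = SigningKey.generate()  # trust anchor held inside the application

def certify(issuer: SigningKey, subject_verify_key_hex: str) -> bytes:
    """Issue a signed certificate binding the subject's public key."""
    cert = json.dumps({"subject": subject_verify_key_hex}).encode()
    return issuer.sign(cert)

new_device = SigningKey.generate()
device_cert = certify(root_admin, new_device.verify_key.encode().hex())

# Any client that knows the admin's verify key can validate the chain
# offline, without ever asking a central PKI.
root_admin.verify_key.verify(device_cert)
```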

PROBLEM No. 2: How do you build initial trust when the enrolled user is unknown?

We have chosen to integrate an enrolment mechanism involving a cross-exchange of secret tokens. This is the SAS mechanism for "Short Authentication String".

From the user's point of view, it looks like this:

The basic principle is that only humans can create trust; cryptography merely guarantees it. Ultimately, it is no different from opening an account on a remote web server.
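
To illustrate, a SAS exchange can be reduced to the following idea: both parties derive short codes from the session secret, and each human checks the code the other reads out. This is a deliberately simplified sketch, not PARSEC's actual enrolment protocol.

```python
# Simplified SAS illustration; not PARSEC's actual enrolment protocol.
import hashlib
import secrets

def sas_code(shared_secret: bytes, party: str, digits: int = 4) -> str:
    """Derive a short, human-comparable code from the session secret."""
    digest = hashlib.sha256(party.encode() + shared_secret).digest()
    return "".join(str(b % 10) for b in digest[:digits])

# After the key exchange, both sides should hold the same session secret.
session_secret = secrets.token_bytes(32)

# Each side displays a code that the *other* human must read back.
print("claimer shows:", sas_code(session_secret, "claimer"))
print("greeter shows:", sas_code(session_secret, "greeter"))
# A man in the middle would produce two different secrets, hence mismatched
# codes, and the humans abort the enrolment.
```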

PROBLEM No. 3: Once you have the keys, how do you validate the data in Zero Trust mode? In other words: always verify and never trust.

Parsec's zero trust & zero knowledge system operates on two security levels:

  • client-side cryptographic security that manages user enrolment and controls the decryption and verification of cryptographic signatures for documents in a given enclave;
  • server-side security, which manages the rights of authorized users on the enclave.

In this radically new architecture, the server fulfills a "simple" role of routing encrypted and signed packets.

Every operation is signed when it is carried out, so it can be systematically verified.

This check is carried out (as sketched below):

  • on the server, in terms of access rights;
  • end to end, on the client.
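
On the client side, this check boils down to refusing any operation whose signature does not verify against the claimed author's key. A minimal sketch with PyNaCl (illustrative, not PARSEC's wire format):

```python
# Client-side zero-trust check: never apply an unverified operation.
from nacl.exceptions import BadSignatureError
from nacl.signing import VerifyKey

def verify_operation(signed_op: bytes, author_key: VerifyKey) -> bytes:
    """Return the operation payload, or reject it outright."""
    try:
        return author_key.verify(signed_op)
    except BadSignatureError:
        raise RuntimeError("operation rejected: invalid signature")
```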

PROBLEM No. 4: What is data, and at what level of granularity should we work?

We generally work at the macroscopic level of the document or file. This is the case when you send a PGP-encrypted e-mail attachment. It is practical, but less and less appropriate as exchanges multiply, making them increasingly complex to manage: you have to encrypt, decrypt, store locally, synchronize and finally share.

The reality is that systems are becoming more and more integrated, and manipulating richer and richer data. To share data with others, we generally send a link to a data stream. The downside is that this is not as secure as an encrypted attachment. 

The finite, physical representation of data disappears completely, because modern web tools provide advanced functionality. The document thus disappears in favor of the notion of service: technically, the file is replaced by a data stream accessible via a URL link.

How do you secure your data?

As a result, there is a growing need for integration between the security layer and the application layer. 

For us, the answer lies in securing the entire solution by design and resolutely tackling the issue at the level of flows rather than documents. Cryptographic security is provided by the application itself.

With these problems solved, we are now faced with some very important challenges, most of which we have overcome in five years of hard work.

Challenge No. 1: Backward compatibility of data models.

This challenge is a very good example of a hard technical problem, because end-to-end encryption exacerbates the problem of backward data compatibility.

In a classic web application, the process is straightforward: an update to functionality leads to an evolution of the data model, which is trivial on a centralized database.

But in an approach that no longer trusts the central server, how do you add a new backward-compatible feature (such as data classification for DCS) when all the data is signed and encrypted and the central server sees nothing?
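
One common way out, sketched below under our own assumptions rather than as PARSEC's actual mechanism, is to version every encrypted payload so that newer clients can still read data written by older ones, with the server never seeing any of it.

```python
# Versioned encrypted payloads; a hypothetical pattern, not PARSEC's format.
import json

def serialize(doc: dict, version: int = 2) -> bytes:
    """Embed the schema version inside the (to-be-encrypted) payload."""
    return json.dumps({"schema_version": version, **doc}).encode()

def deserialize(raw: bytes) -> dict:
    doc = json.loads(raw)
    if doc.get("schema_version", 1) < 2:
        # Older payloads lack the new classification field: default it.
        doc.setdefault("classification", "UNLABELLED")
    return doc
```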

Challenge No. 2: Integrating security into the application.

We can illustrate this challenge with a simple example: how do you share very large files?

Uploading a 1 MB file atomically is indeed easy to do; uploading a 1 GB file atomically is not, hence the choice of atomicity at the data-block level.

But when the server receives blocks, what guarantee is there that it will receive the remaining blocks, or the encrypted table that reconciles the entire document? Hence the idea of handling the atomicity of a large document from the user's point of view.
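
The idea can be sketched as follows: the file is cut into encrypted blocks, and an encrypted manifest reconciles them, so the server only ever routes opaque pieces. The format below is hypothetical, assuming PyNaCl; PARSEC's real format differs.

```python
# Block-level encryption with an encrypted manifest; hypothetical format.
import hashlib
import json
from nacl.secret import SecretBox
from nacl.utils import random as nacl_random

BLOCK_SIZE = 512 * 1024  # 512 KiB blocks

def split_and_encrypt(data: bytes, key: bytes):
    box = SecretBox(key)
    blocks, digests = [], []
    for i in range(0, len(data), BLOCK_SIZE):
        block = box.encrypt(data[i : i + BLOCK_SIZE])
        blocks.append(block)
        digests.append(hashlib.sha256(block).hexdigest())
    # The manifest reconciles the whole document; it is itself encrypted,
    # so the server never sees the document's structure.
    manifest = box.encrypt(json.dumps({"blocks": digests}).encode())
    return blocks, manifest

key = nacl_random(SecretBox.KEY_SIZE)
blocks, manifest = split_and_encrypt(b"x" * (2 * BLOCK_SIZE + 10), key)
```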

Challenge No. 3: How do you manage decorrelation between client and server systems?

DEFINITION: Eventual consistency is the guarantee that when an update is made in a distributed database, this update will ultimately be reflected in all nodes storing the data, resulting in the same response each time the data is queried.

A classic web system is made up of a "dumb" web-browser client that only displays, and an intelligent server that manipulates the data.

But in the PARSEC model, we have to deal with a server that is deliberately kept unintelligent and defer the processing of encrypted data to the client. The consequence is that the two systems must be able to work while decorrelated, and that "in the end" they have to agree.

Since the client system moves as fast as the user, at some point there's bound to be a decorrelation between client and server states that needs to be managed.

Whereas traditional web applications have to talk to the server for every operation, with PARSEC you work locally, so depending on the use case you can work faster, while tolerating network outages or low bandwidth, because reconciliation happens a posteriori by design.
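
This local-first behaviour can be sketched as an outbox of pending operations: the client applies each operation immediately and reconciles with the server whenever the network allows. This is an illustrative model, not PARSEC's actual synchronization engine.

```python
# Local-first outbox; an illustrative model, not PARSEC's sync engine.
import queue

local_state: dict = {}
outbox: "queue.Queue[tuple]" = queue.Queue()

def apply_local(key: str, value: str) -> None:
    local_state[key] = value         # the user sees the result immediately...
    outbox.put(("set", key, value))  # ...while the op waits for the network

def sync_when_online(send) -> None:
    """Drain pending operations once connectivity returns."""
    while not outbox.empty():
        send(outbox.get())           # client and server eventually converge
```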

The future: integration of the Data Centric Security principle and security classification management.

The general idea behind DCS is to attach XML metadata to the file or document, providing it with classification, signature and encryption functionality in line with NATO standards: STANAG 4774 (confidentiality label syntax) and STANAG 4778 (metadata binding).

It's a vision with consequences:

  • 1) Since zero trust must accommodate different protocols from one partner to another, a pivot format is needed to guarantee interoperability between organizations via standard formats.
  • 2) DCS must include metadata characteristic of usage, hence data standardization and pivot data formats for data exchange between organizations (see the sketch below).
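
As a purely hypothetical illustration of such a pivot format, an export could wrap the still-encrypted payload in a standard classification envelope. The field names here are ours, not STANAG's.

```python
# Hypothetical pivot-format export; field names are illustrative.
import json

def export_with_label(ciphertext_b64: str, classification: str,
                      policy: str = "FRA") -> str:
    """Wrap an encrypted payload with interoperable classification metadata."""
    envelope = {
        "label": {
            "policy_identifier": policy,       # e.g. a national policy
            "classification": classification,  # e.g. "RESTRICTED"
        },
        "payload": ciphertext_b64,  # body remains end-to-end encrypted
    }
    return json.dumps(envelope)
```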

The more the application passes under the user's control, the more compatibility problems we will face with organization-specific formatting.

The proposed architecture enables data flow exchanges within a French organization.

The proposed inter-nations architecture includes two interfaces:

  • a Parsec-to-DCS gateway for exporting documents in DCS format;
  • a dedicated Parsec PKI identity server (auto-enrolment) interconnected with the internal or partner organization's PKI (enrolment delegated to the central PKI).

A central issue remains: classifying shared data flows as flows, since the NATO DCS standard is document-oriented, not flow-oriented.

In this case, the solution is to adopt a single flow-oriented application solution at nation level.

Conclusion

The PARSEC system is natively capable of supporting compatibility with Data Centric Security (DCS) and even going beyond it, enabling much more advanced functionality to be managed between the various players using PARSEC.

For example, it will be possible to label flows, which is not possible with the DCS pivot for single documents.

By PARSEC
