
CIDRE

Confidentialité, Intégrité, Disponibilité et REpartition (Confidentiality, Integrity, Availability, and Distribution)

 

 

PRESENTATION:

 

For many aspects of our everyday life, we rely heavily on information systems, many of which are based on massively networked devices that support a population of interacting and cooperating entities. As these information systems become increasingly open and complex, accidental and intentional failures become considerably more frequent and severe.

The CIDRE project considers three complementary levels of study: the Node Level, the Group Level, and the Open Network Level:

   - Node Level. In the context of this proposal, the term node refers either to a device that hosts a network service or to the process that runs this service. Node security management deserves particular attention since, from the user's point of view, the security of his or her own devices is crucial. Sensitive information and services must therefore be locally protected against various forms of attacks. This protection may take a dual form, namely prevention and detection.

   - Group Level. Distributed applications often rely on the identification of sets of interacting entities. These subsets are called groups, clusters, collections, neighborhoods, spheres, or communities, according to the criteria that define their membership. The adopted criteria may for instance reflect the fact that the members are administered by a single person, that they share the same security policy, that they are located in close physical proximity, that they need to be strongly synchronized, that they cooperate with each other, or that they share mutual interests. Given the vast number of possible contexts and terminologies, we refer within this document to a single type of set of entities, which we call a set of nodes. We assume that a node can locally and independently identify a set of nodes and modify the composition of this set at any time. The node that manages a set has to know the identity of each of its members and should be able to communicate directly with them without relying on a third party. Despite these two restrictions, this definition remains general enough to include most of the examples mentioned above as particular cases. Of course, more restrictive behaviors can be specified by adding further constraints. For example, if we consider the concept of group (and its associated group communication services), a group is first of all a set of nodes for which stronger properties have to be ensured: in particular, the existence of a group is known by all its members, and the evolution of its membership is observed in a consistent way by all of them. We are convinced that security can benefit from the existence and identification of sets of nodes of limited size, as they can help improve the efficiency of detection and prevention mechanisms.

   - Open Network Level. In the context of large-scale distributed and dynamic systems, interacting with unknown entities becomes an unavoidable habit despite the induced risk. For instance, consider a mobile user who connects his or her laptop to a public Wi-Fi access point to interact with his or her company. At this point, data (whether valuable or not) is updated and managed through untrusted, non-dedicated entities (communication infrastructure and nodes) that provide multiple services to multiple parties during that user's connection. In the same way, the same device (e.g., laptop, PDA, USB key) is often used for both professional and private activities, each activity accessing and manipulating critical data.

 

 

ACTIVITIES/OBJECTIVES & RESEARCH TOPICS:

 

To study new security solutions at each level (node, set of nodes, and open network), one must take into account that interacting with devices whose owners are unknown has become a necessity. To reduce the risk of relying on dishonest entities, a trust mechanism is an essential prevention tool that aims at measuring the capacity of a remote node to provide a correct service. Such a mechanism should make it possible to overcome ill-founded suspicions and to become aware of established misbehaviors. To identify such misbehaviors, intrusion detection systems are necessary. Finally, privacy protection is a basic user right that must be respected even in the presence of tools whose goal is to control users' actions or behaviors. The CIDRE project will thus focus on these three different aspects of security: trust, intrusion detection, and privacy (and their potential interactions):

 

   - Trust. While the distributed computing community relies on the trustworthiness of its algorithms to ensure system availability, the security community has historically assumed a Trusted Computing Base (TCB) that contains the security mechanisms (such as access control, cryptography, etc.) implementing the security policy. Unfortunately, as information systems become increasingly complex and open, managing the TCB may itself become very complex, dynamic, and error-prone.

From our point of view, an appealing approach is to distribute the TCB, managing it on each node, and to leverage the trustworthiness of distributed algorithms in order to strengthen each node's TCB. Accordingly, the CIDRE project proposes to study automated trust management systems at all three identified levels:
   . at the node level, such a system should allow each node to evaluate by itself the trustworthiness of its peers and to self-configure the security mechanisms it implements;
   . at the group level, such a system could rely on existing trust relations with other nodes of the group to enhance the significance and the reliability of the gathered information; 
   . at the open network level, such a system may rely on reputation mechanisms to estimate the trustworthiness of the peers the node interacts with. The system may also benefit from the information provided by a priori trusted peers that, for instance, belong to the same group (see previous item).

For the last two items, the automated trust management system will de facto follow the distributed computing approach. As such, emphasis will be put on the trustworthiness of the designed distributed algorithms.  Thus, the proposed approaches will provide both the adequate security mechanisms and a trustworthy distributed way of managing them.
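
To make the intended combination of direct observations and peer recommendations concrete, the following minimal sketch (in Python, with hypothetical names and weights that are not part of the proposal itself) shows how a node might blend its own first-hand experience with the opinions of a priori trusted group members:

from dataclasses import dataclass, field

@dataclass
class TrustEstimator:
    # Weight given to first-hand experience versus recommendations (assumed value).
    direct_weight: float = 0.7
    successes: dict = field(default_factory=dict)  # peer -> number of correct services
    failures: dict = field(default_factory=dict)   # peer -> number of observed misbehaviors

    def record(self, peer, ok):
        table = self.successes if ok else self.failures
        table[peer] = table.get(peer, 0) + 1

    def direct_trust(self, peer):
        # Beta-reputation expectation: (s + 1) / (s + f + 2); 0.5 for an unknown peer.
        s = self.successes.get(peer, 0)
        f = self.failures.get(peer, 0)
        return (s + 1) / (s + f + 2)

    def combined_trust(self, peer, recommendations):
        # Blend first-hand trust with the average opinion of trusted group members.
        if not recommendations:
            return self.direct_trust(peer)
        indirect = sum(recommendations) / len(recommendations)
        return self.direct_weight * self.direct_trust(peer) + (1 - self.direct_weight) * indirect

# Example: after two correct and one incorrect interactions with peer "P",
# the node asks two trusted group members for their opinions.
estimator = TrustEstimator()
estimator.record("P", ok=True)
estimator.record("P", ok=True)
estimator.record("P", ok=False)
print(estimator.combined_trust("P", recommendations=[0.9, 0.8]))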

 

   - Intrusion Detection. By exploiting vulnerabilities in operating systems, applications, or network services, an attacker can defeat the preventive security mechanisms and violate the security policy of the whole system. The goal of intrusion detection systems is to detect such violations of the security policy by analyzing data generated on the monitored system.

Two main approaches coexist to detect intrusions: the misuse approach and the anomaly approach. On the one hand, the misuse approach consists in detecting previously known forms of intrusion, defined by the signatures that attacks leave in the analyzed data. This approach is of course unable to detect unknown attacks, and perfect accuracy would require perfect knowledge of all attack scenarios. On the other hand, anomaly-based intrusion detection consists in detecting deviations of the observed behavior of the monitored system from a reference of normal behavior built in a previous step: when a difference occurs, it is considered a symptom of an intrusion.
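
The contrast between the two approaches can be illustrated with a deliberately simplified sketch (in Python; the toy signature set and toy behavior profile below are illustrative assumptions, not CIDRE material):

# Misuse detection: flag an event only if it contains a known attack signature.
KNOWN_SIGNATURES = {"' OR 1=1 --", "\x90\x90\x90\x90"}  # toy SQL injection and NOP-sled patterns

def misuse_detect(event):
    return any(signature in event for signature in KNOWN_SIGNATURES)

# Anomaly detection: score an event by how rarely it appeared in a reference
# profile of normal behavior (higher score = more anomalous).
def anomaly_score(profile, event):
    total = sum(profile.values())
    frequency = profile.get(event, 0) / max(total, 1)
    return 1.0 - frequency

# Reference profile learned during a supposedly attack-free observation window.
profile = {"open tcp/443": 90, "write /tmp/app.log": 7, "read /etc/passwd": 3}

print(misuse_detect("GET /login?user=' OR 1=1 --"))   # True: matches a known signature
print(anomaly_score(profile, "open tcp/443"))         # low score: usual event
print(anomaly_score(profile, "load kernel module"))   # 1.0: never seen in the reference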

From our point of view, while useful in practice, misuse detection is intrinsically limited. Indeed, it requires updating the signature database in real time, similarly to what has to be done for antivirus tools. This approach appears insufficient to us, since thousands of machines still fall victim to malware. The problem is made worse by the fact that malware now spreads faster than ever, limiting the scope for human intervention and response. As an illustration, the Slammer worm infected most of the MS-SQL servers in the world, i.e., more than 100,000 machines, in only a few minutes.

In our work, we will focus on the anomaly approach. We propose to study two complementary methods:
   . Illegal Flow Detection: This first method aims at detecting information flows that violate the security policy. Our goal here is to detect information flows in the monitored system that are allowed by the access control mechanism but are illegal from the security policy point of view (a minimal sketch is given below).
   . Data Corruption Detection: This second method aims at detecting intrusions that target applications and make them execute illegal actions by using these applications in an incorrect way. It complements the previous one in the sense that the incorrect use of an application may be legal from the point of view of information flows and access control mechanisms, yet incorrect with respect to the security policy.

In both approaches, the access control mechanisms or the monitored applications can be either configured and executed on a single node, or distributed on a network of machines. Thus, the approaches must be studied at least at the first two levels (nodes and sets of nodes) defined in this proposal.
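
To give an idea of what illegal flow detection means in practice, here is a minimal sketch (in Python; the labels, containers, and policy are hypothetical examples, not the CIDRE detection model): each individual read or write is permitted by access control, yet their composition propagates secret data to a public sink and violates the information flow policy.

# Security labels attached to information containers (files, sockets, ...).
labels = {
    "/srv/payroll.db": {"secret"},
    "/tmp/report.txt": set(),
    "socket:public-web": set(),
}

# Information flow policy: data tagged "secret" must never reach the public socket.
FORBIDDEN = [("secret", "socket:public-web")]

def observe_flow(src, dst):
    # An observed flow (a process reads src then writes dst): the destination
    # inherits the source's tags, and the policy is checked on the result.
    labels[dst] = labels[dst] | labels[src]
    for tag, sink in FORBIDDEN:
        if dst == sink and tag in labels[dst]:
            print(f"ALERT: illegal flow, tag '{tag}' reached {sink}")

# Each step below is allowed by the access control mechanism, but their
# composition leaks secret data to a public sink.
observe_flow("/srv/payroll.db", "/tmp/report.txt")    # legal read/write, no alert
observe_flow("/tmp/report.txt", "socket:public-web")  # raises the alert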

 

   - Privacy. In our world of ubiquitous technologies, each individual constantly leaves digital traces related to his or her activities and interests, and these traces can be linked to his or her identity. In the forthcoming years, the protection of privacy will be one of the greatest challenges that lie ahead, and also an important condition for the development of the Information Society. Moreover, due to legal and confidentiality issues, privacy concerns arise naturally for applications working on sensitive data, such as the medical records of patients or the proprietary datasets of enterprises.

Privacy Enhancing Technologies (PETs) are generally designed to respect two principles: data minimization and data sovereignty. The data minimization principle states that only the information necessary to complete a particular application should be disclosed, and no more; it is a direct application of the legitimacy criteria defined by the European data protection directive. The data sovereignty principle states that data related to an individual belong to him or her, and that he or she should stay in control of how these data are used and for which purpose; it can be seen as an extension of many national legislations on medical data, which consider that a patient record belongs to the patient, and not to the doctors who create or update it, nor to the hospital that stores it.

In the CIDRE team, we will investigate PETs that operate at the three different levels (node, set of nodes, or open distributed system) and are generally based on a mix of different foundations such as cryptographic techniques, security policies, and access control mechanisms, to name a few. Examples of domains where privacy and utility collide and that will be studied within the context of CIDRE include: identity and privacy, geo-privacy, distributed computing and privacy, privacy-preserving data mining, and privacy issues in social networks.
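
As a small illustration of the data minimization principle only (the purposes and attribute sets below are hypothetical and not part of the CIDRE proposal), a service could be handed strictly the attributes required for its stated purpose:

# Full user record held by the data owner.
user_record = {
    "name": "Alice Martin",
    "birth_date": "1985-04-02",
    "address": "Rennes, France",
    "blood_type": "O+",
}

# Each purpose is entitled only to the attributes it strictly needs.
PURPOSE_TO_ATTRIBUTES = {
    "age_verification": {"birth_date"},
    "emergency_care": {"name", "blood_type"},
}

def disclose(record, purpose):
    # Release only the attributes required for the stated purpose, and no more.
    allowed = PURPOSE_TO_ATTRIBUTES.get(purpose, set())
    return {key: value for key, value in record.items() if key in allowed}

print(disclose(user_record, "age_verification"))  # only the birth date
print(disclose(user_record, "emergency_care"))    # only name and blood type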
