Information Assurance Scenario Canonicalization

This is a research project proposal that I hope to develop into a master's or doctoral thesis.

Problem

Understanding the threat spectrum when designing security policies that govern how businesses share and use information by means of information and communication technologies (ICTs) is a complex process. Every company that uses ICTs to conduct business needs some form of information assurance program that guides proper handling of shared information from creation to destruction. Information depends on data, and both data and information can be misused in ways that put any business at risk of harming its customers or itself.

Internet-based social media platforms, in particular, have made sharing information so easy that they reduce the time and money businesses spend on communication while connecting them to a global audience. But the opportunities and risks of using social media platforms are not holistically clear. The mediums that store, transfer, and communicate information dramatically affect how we perceive the consequences. All organizations need a way of thoroughly understanding the risks involved with the evolution, emergence, and integration of technologies capable of distributing data and information.

Hypothesis

By using a multidisciplinary approach to canonicalize information-sharing scenarios for a range of public-sector and private-sector organizations, a scalable framework can be developed to quantify the risk and opportunity involved in the use of ICTs, with a focus on Internet-based social media platforms.

Similar work

  • Scenario planning

Mats Lindgren and Hans Bandhold, authors of Scenario Planning: The Link Between Future and Strategy, illustrate many process models that can be adapted to better understand the relationships within information. By applying these models across various contexts, the causes and effects of data, information, uses, and mediums can be organized clearly and effectively.

  • Philosophy of information

Dr. Luciano Floridi, author of Information: A Very Short Introduction, describes the implications of biological information. Applied to information assurance, this conceptual analysis allows for the development of specific information models that illustrate the security implications of humans and technologies acting as information-storing and information-sharing processors.

  • Information assurance

The United States Chief Information Officers Council, in a document entitled Guidelines for Secure Use of Social Media by Federal Departments and Agencies, outlines a model developed by Dr. Mark Drapeau and Dr. Linton Wells that describes the four functions of social software. However, the current state of ICT relies heavily on visual and auditory stimuli. An expansion of this social-media model must include an analysis of the other three information receptors: touch, taste, and smell. This expansion is needed to develop scenarios that account for future trends in virtual reality and a deeper integration into a human-developed infosphere.

Proposed outcomes

  • Goal #1

This phase of the project entails graphical modeling of a wide range of information-sharing scenarios utilizing ICTs. The scope of these scenarios will begin with Internet-based social media platforms and will expand to include various forms of telecommunication services. A comprehensive selection of scenarios is necessary in order to compile a large knowledge base for Goal #2. The knowledge base will be organized systematically according to the complete life cycle of information processing, covering data, information, information stakeholders, and information transport mediums.

  • Goal #2

Using the knowledge base established in Goal #1, a critical analysis must take place utilizing Dr. Floridi’s work on the philosophy of information. This analysis should include applied concepts such as information as, for, and about reality. A better understanding of the relationships between people, ICTs, and combinations of the two (dependent on origin and destination) can be quantified in direct relation to our perception of any given ICT’s interface. Further research on human perceptions of ICTs can draw on Dr. Sherry Turkle’s work in psychoanalysis and culture concerning people’s relationships with technology. This exploration will expand the knowledge base for Goal #3.

  • Goal #3

I expect that, following Goal #2, commonalities among ICT interfaces will become evident. This should allow for an expansion of Dr. Mark Drapeau and Dr. Linton Wells’ four-functions-of-social-software model. The expanded model should visually depict a more precise yet comprehensive representation of ICT use, and should be able to quantify human-centric information control feasibility, impact, and residual risk depending on the source and destination of information throughout its complete life cycle.

  • Project Objective

The final phase of this project will include the development of system development life cycle processes to assist public-sector and private-sector organizations in establishing more coherent information assurance programs.

Understanding Firewall Technologies

Introduction

Firewalls are, unfortunately, an essential part of connecting to the Internet. The devices you use to connect to the Internet run complicated operating systems that are prone to security risks due to the nature of software engineering. Because of these persistent weaknesses in the software on your personal computer and hand-held devices, installing a firewall is an inherently reactive security measure: no amount of cryptography will completely protect you against buggy software.

In order to minimize risk and protect yourself from the threats that exist beyond your home or office local area network, it’s wise to implement, at the very least, a basic stand-alone firewall (such as a router). Firewalls are designed to monitor and/or prevent network intrusions, and because they run far less code than a general-purpose operating system, they are less likely to contain bugs or security holes.

One of the greatest things to happen to the Internet is the popularity of wireless (802.11 a/b/g/n) devices. You may be skeptical because of the security risks inherent in unsecured wireless networks, but the widespread deployment of wireless routers directly, if unintentionally, put a hardware (stand-alone) firewall in front of millions, if not billions, of home networks.

There are many different technologies used in firewalls: packet filtering, stateful inspection, application proxying, unified threat management (UTM), intrusion detection and/or prevention systems (IDPS), and network address translation (NAT). There are significant performance differences between the types; however, as a typical home user you will not notice the throughput limitations.

Before we jump into the various firewall technologies, you should understand the difference between an appliance-based firewall and a server-based firewall. A typical Linksys home-network router is an appliance firewall because the hardware was designed around the needs of the firewall’s software. There are exceptions, of course, including third-party firewall operating systems such as DD-WRT, OpenWrt, or Tomato. But running these operating systems on appliance-based firewalls does not make them server-based firewalls, because the devices remain static, unchangeable units. Server-based firewalls can be changed to meet the requirements of any given local area network; they include x86/x86-64 computers onto which Linux-based firewall distributions can be installed via CD, DVD, USB, or PXE.

Packet Filter

Packet filtering is the oldest and most basic firewall technology, and all firewalls perform some level of it. A packet filter simply allows or denies individual packets based on a set of rules that inspect information in the packet’s header, such as the source or destination address, protocol, and/or port number. Packet filtering does not inspect the payload, nor does it track sessions, which makes it vulnerable to spoofing attacks. Because packet filters work only on layers 3 and 4 (network and transport) of the OSI model, the technology is very efficient.
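
The first-match decision logic described above can be sketched in a few lines of Python; the rule set, field names, and addresses here are invented for illustration and are not taken from any real firewall:

```python
from dataclasses import dataclass

@dataclass
class Packet:
    """Only the header fields a packet filter inspects; payload is ignored."""
    src: str     # source IP address
    dst: str     # destination IP address
    proto: str   # "tcp", "udp", or "icmp"
    dport: int   # destination port

# Each rule matches on header fields only; a field left out means "any".
# The first matching rule wins, and the default policy is to deny.
RULES = [
    {"proto": "tcp", "dport": 80,  "action": "allow"},  # HTTP
    {"proto": "tcp", "dport": 443, "action": "allow"},  # HTTPS
    {"src": "203.0.113.9", "action": "deny"},           # a blocked host
]

def filter_packet(pkt: Packet) -> str:
    for rule in RULES:
        fields = {k: v for k, v in rule.items() if k != "action"}
        if all(getattr(pkt, k) == v for k, v in fields.items()):
            return rule["action"]
    return "deny"  # nothing matched: fall back to the default policy
```

Note that the function never looks at a payload or at any previous packet — that is exactly the limitation stateful inspection addresses.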

Stateful Packet Inspection (SPI)

Stateful packet inspection is built into every modern firewall system. A “stateful” firewall monitors the state of all TCP sessions, including the sequence numbers in packet headers; when a session ends, its entry is removed from the session table. Like packet filters, stateful firewalls do not monitor the payload of data packets. Implementations differ between vendors because UDP and ICMP traffic, for example, have no packet “states” for the firewall to track, unlike TCP, where every session has a well-defined start and end. Connectionless “sessions” can still be tracked, but they are only ended by a timeout.
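
A session table of this kind can be sketched as follows. The 5-tuple key, the idle-timeout values, and the method names are illustrative assumptions — real vendors implement this differently, which is exactly the variation described above:

```python
import time

# Assumed idle timeouts for illustration; vendors use their own defaults.
TCP_IDLE_TIMEOUT = 300   # seconds
UDP_IDLE_TIMEOUT = 30    # connectionless traffic expires much faster

class StatefulTracker:
    """Tracks sessions keyed by the classic (src, sport, dst, dport, proto) 5-tuple."""

    def __init__(self):
        self.sessions = {}   # 5-tuple -> last-seen timestamp

    def outbound(self, src, sport, dst, dport, proto, now=None):
        # An outbound packet creates (or refreshes) a session entry.
        now = time.time() if now is None else now
        self.sessions[(src, sport, dst, dport, proto)] = now

    def inbound_allowed(self, src, sport, dst, dport, proto, now=None):
        # An inbound packet is allowed only if it matches the reverse
        # of an existing, non-expired session.
        now = time.time() if now is None else now
        key = (dst, dport, src, sport, proto)   # reverse direction
        last = self.sessions.get(key)
        if last is None:
            return False                        # no session: drop
        timeout = TCP_IDLE_TIMEOUT if proto == "tcp" else UDP_IDLE_TIMEOUT
        if now - last > timeout:
            del self.sessions[key]              # expired: remove table entry
            return False
        self.sessions[key] = now                # refresh on valid traffic
        return True
```

For TCP, a production firewall would also follow the handshake and teardown flags; the timeout path here is how connectionless UDP “sessions” end.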

SPI Examples

Appliance-based stateful firewalls include any typical home/small office router or wireless access point. Server-based stateful firewall operating systems include:

Some of these server-based stateful firewall distributions support basic intrusion detection and prevention system technologies (keep reading…).

NOTE: The reason people like to replace the default operating system found in most routers, such as those by Linksys, is that the default operating systems are tailored to home users who typically do not know enough about firewall and/or routing systems to modify them. It would cost router vendors more money to increase the complexity of these firewall operating systems, not to mention the probable increase in tech-support calls. By “upgrading” an appliance-based router’s firmware with third-party firmware such as DD-WRT, advanced users gain access to better router/firewall controls.

Application Proxy

Application-proxy firewalls are the most thorough and most secure firewall technology for specific network applications because the firewall acts as the middle man for all communications across all seven layers of the OSI model. They are most commonly used for simple Web hosting or (non-time-sensitive) e-mail services, and are not used in bandwidth-intensive environments (such as Web file servers). Each protocol that needs to be monitored and controlled requires its own proxy application module, increasing the need for computational resources. Because of this dependency on computational resources, application-proxy firewalls are susceptible to denial-of-service attacks. Their advantages over packet filter and stateful firewalls include advanced security monitoring functions: application proxies can authenticate users directly, examine the payload of data packets, and make decisions based on those payloads. They can also be deployed in redundant configurations and/or clusters.
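
The payload-level decision making described above can be illustrated with a toy HTTP request inspector. The method whitelist and the byte patterns below are invented examples, not any real proxy’s rule set:

```python
# Hypothetical signatures; real proxies ship vendor-maintained rule sets.
BLOCKED_PATTERNS = [b"<script>", b"../../", b"cmd.exe"]
ALLOWED_METHODS = {"GET", "HEAD", "POST"}

def inspect_http_request(raw: bytes) -> bool:
    """Return True if the request may be forwarded to the server."""
    try:
        head, _, body = raw.partition(b"\r\n\r\n")
        request_line = head.split(b"\r\n", 1)[0].decode("ascii")
        method = request_line.split(" ")[0]
    except UnicodeDecodeError:
        return False                    # malformed request: reject outright
    if method not in ALLOWED_METHODS:
        return False                    # protocol-aware: enforce a method whitelist
    # Unlike a packet filter, scan the entire request, payload included.
    return not any(p in raw for p in BLOCKED_PATTERNS)
```

A real proxy would sit between the two TCP connections and apply a check like this before relaying each request; this sketch only shows the per-request decision.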

Application Proxy Examples:

  • ($$) Microsoft’s ISA Server, a server-based firewall, which can run on Server Core, making it more secure and less taxing on the server’s limited resources. The best use of Microsoft ISA Server is within the local area network, not at the network perimeter. (software based)
  • ($$) Fortinet Web Application Firewall (hardware based)
  • ($$) McAfee Firewall Enterprise (hardware based)
  • (free) Zorp GPL, a less comprehensive application proxy that an advanced user can install onto a *nix operating system. (software based)

Unified Threat Management (UTM)

UTM firewalls combine several firewall technologies, including stateful inspection, intrusion detection and prevention, anti-virus, anti-spyware, anti-phishing, anti-adware, anti-spam, and web content filtering. UTMs are used primarily in low-throughput environments with low user counts. They are not limited to low-throughput networks, however, because server-based firewalls are limited only by how much money you can put into the hardware. The IPS capabilities in UTM firewalls are typically subsets of full-blown IPS features, meaning they support protection for only a small number of protocols, and anti-virus functionality is generally limited to the HTTP, SMTP, and POP3 protocols.

UTM Examples:

Intrusion Detection and Prevention System (IDPS, IDS, IPS)

Intrusion detection systems (IDS) only monitor. Typically, an IDS is used in conjunction with an intrusion prevention system (IPS): the IDS monitors and logs network traffic, and the logged information is then shared with the IPS, whether network-based or host-based.

(Internet) –> (IDS) –> (Firewall) –> (IPS) –> (Network/Servers/Hosts)

In the scenario above, the IDS monitors all traffic that enters and leaves the network. This is important because log analysis is crucial to the proper care of a business network. The information the IDS collects can be used by the IPS to anticipate incoming traffic, and placing a leaner SPI firewall in front of the IPS reduces the IPS’s processing load, leaving it maximum resources to tackle more complex traffic.

IDPS are commonly associated with network-based devices, meaning appliance- and server-based devices that support the network. But IDPS can also support, monitor, and protect the hosts on the network in the form of software. Host-based intrusion detection and prevention systems (HIDS/HIPS) complement the NIDS/NIPS by providing the complete IDPS with up-to-date information about the needs and activity of the hosts on a network.

IDPS differ from UTMs in that IDPS are much more feature-rich. A UTM may support only a couple hundred signatures and a dozen or so protocols, whereas a full IDPS will utilize several thousand signatures and over 40 protocols; of course, this depends on the vendor and/or product. IDPS can manage their own rule sets by “learning” and can update themselves either by downloading new content or by sharing information with other IDPS on the network. Stand-alone appliance-based IDPS can also support multi-gigabit speeds.
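
At its core, an IDPS engine scans traffic against a signature database; the tiny signature set and the IDS/IPS mode switch below are invented for illustration (production engines ship thousands of vendor-maintained rules):

```python
# Toy signature set keyed by a made-up signature ID.
SIGNATURES = {
    "sid-1001": b"\x90\x90\x90\x90",   # NOP-sled fragment
    "sid-1002": b"' OR '1'='1",        # SQL-injection probe
    "sid-1003": b"/etc/passwd",        # path-traversal attempt
}

def scan(payload: bytes):
    """Return the IDs of every signature that matches the payload."""
    return [sid for sid, pattern in SIGNATURES.items() if pattern in payload]

def handle(payload: bytes, mode="ids"):
    """In IDS mode, only report matches; in IPS mode, also drop the packet."""
    hits = scan(payload)
    if hits and mode == "ips":
        return "drop", hits
    return "pass", hits   # an IDS never blocks, it only alerts
```

The "learning" and rule-sharing behavior mentioned above would amount to adding entries to the signature table at runtime.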

HIDS such as OSSEC (see below) are important to businesses that must be PCI compliant because they monitor extremely detailed aspects of each host. The information OSSEC collects is stored centrally on a local server for system administrators to review.
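
One of the detailed host aspects an HIDS watches is file integrity, which reduces to a baseline-and-compare loop over cryptographic digests. The sketch below uses SHA-256; the function names are hypothetical and are not OSSEC’s API:

```python
import hashlib
import os

def snapshot(paths):
    """Record a SHA-256 digest for each monitored file (the baseline)."""
    digests = {}
    for path in paths:
        with open(path, "rb") as f:
            digests[path] = hashlib.sha256(f.read()).hexdigest()
    return digests

def check(baseline):
    """Compare the current state of each file against the baseline."""
    alerts = []
    for path, old in baseline.items():
        if not os.path.exists(path):
            alerts.append((path, "deleted"))
            continue
        with open(path, "rb") as f:
            new = hashlib.sha256(f.read()).hexdigest()
        if new != old:
            alerts.append((path, "modified"))
    return alerts
```

In a real deployment the baseline and the alerts would be shipped to the central log server mentioned above rather than kept on the monitored host.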

IDPS Examples:

Read this excellent article: Three Open Source IDS/IPS Engines: The Setup

Network Intrusion Prevention Systems (NIPS)

Host Intrusion Prevention Systems (HIPS)

Network Intrusion Detection Systems (NIDS)

Host Intrusion Detection Systems (HIDS)

  • (free) OSSEC (SANS Institute InfoSec Reading Room: Using OSSEC with NETinVM [pdf])
  • (free) Samhain

NOTE: Cisco, Juniper, and Check Point are the largest suppliers of business-class firewall devices. Be sure to do your research and to ask questions when shopping for security solutions. ICSA Labs is always a good place to start.

Additional resources: