Section 1: Foundation of the Study
The majority of business information is stored electronically, creating the need for systems to secure this information. As the systems providing computer and internet security become more complex, security is increasingly compromised. Interlopers and hackers continue to devise new and more creative means of gaining entry to information stored on networks, and users face a growing number of security attacks. Surprisingly, despite the variety and volume of security risks, security tools designed to counter them are slow to appear for business consumption (Garcia-Teodoro et al., 2009). With the introduction of new technological trends, information technology (IT) systems have an even greater need to be revamped and upgraded.
Businesses face multiple security risks as confidential and high-risk information is often stored within their computers and remote networks. While this method of storage improves efficiency and reduces storage costs, it also raises risks related to reliability, security, and loss of information. With business information and IT resources existing remotely, businesses are increasingly concerned about their vulnerability to malicious attack (Leavitt, 2009). To address the needs of a business venture, security programs need to provide a realistic course of action to establish network security conditions, define security policies, and implement protection procedures along with sub-network policies. From this point, network security elements can be strategized to provide the most effective security for an organization’s purposes.
Background of the Problem
Internet security, network protection, and general cyber security are among the fastest-growing fields in the computer technology industry (Rountree, 2013). Computer and network security systems protect systems from viruses, Trojans, and worms by establishing protocols. Good security protects company assets and information, gives a company a competitive advantage, and ensures compliance with fiduciary and legal requirements (Canavan, 2001). Research indicates that business owners generally feel insecure about the safety, security, and confidentiality of their business networks (Subashini & Kavitha, 2011). Kim and Park (2013) report nearly 19,000 cases of malicious code and hacking incidents in business enterprises during 2012; most were preventable if proper DNS access control had been used.
Computer security requires carrying out all four of the following at all times: controlling access to a computer system, controlling access to the resources protected by the system, protecting data in transit between systems, and securing applications against malicious input (Gollmann, 2010). Intrusion detection and prevention systems, firewalls, a network security policy, encryption, and monitored protections against viruses, worms, and Trojans solve many security problems. Implementing a network backup and an accountability system can avoid many problems with data retrieval and with finding causes after a cyber attack (Jajodia, 2010).
The past two decades have seen many strategies developed, and new strategies are often initiated by changes in the Internet. According to Magee and Thompson (2010), businesses with a standalone network are at greater risk for security breaches and attacks than businesses with interconnected networks. Their finding seems counter-intuitive, but businesses with standalone networks over-estimate their networks' security measures, and updates are not rigorously applied (Magee & Thompson, 2010). Security threats continue to increase in complexity, and developing flexible, adaptive, and forward-thinking security approaches is not easy (Garcia-Teodoro et al., 2009). Hackers rely on human error and inattention to updates and passwords to infiltrate systems. Wireless sensor networks (WSN) are typically used in unattended environments, so user authentication is a primary security concern (Das, 2009). These reasons demonstrate that a company protocol establishing routine security measures is essential.
Hackers are targeting small businesses, and the number and variety of attacks rise daily. The FBI Readiness Team reports an average of 3,000 servers hacked regularly, with resulting irretrievable losses averaging between $100,000 and $200,000 per victim. Small businesses are hackers' first choice (Brow, 2012). Cybercrime costs small businesses as much as $140 billion yearly in support to stop malware and vulnerability attacks (Shain, 2012). Victimization by cyber crimes can lead to job loss or layoffs due to resources being stolen by hackers (Kendrick, 2010).
The research question can help identify the strategies used by small businesses that weaken their computer security rather than protect their computers. The hypothesis the research is based upon can be stated as follows: if small businesses are experiencing computer attacks, then security protocol is not being followed properly. The hypothesis leads to the overarching research question: ‘What security policies and organizational strategies will small businesses follow to defend organizational networks against cyber-attacks?’
The purpose of the research is to evaluate the small business security policies and organizational strategies that are used to protect the organization against cyber-attacks. A minimum of 30 security managers across various industries in the Northern Virginia and Washington, DC area will provide the data. Individuals with over fifteen years’ experience in network security will be the participants. The research is designed to provide knowledge about the origins of organizational cyber threats. The results from this study can be used to provide education and training for security network administrators on appropriate measures for providing security. The results will show the level of awareness among business owners and managers of the importance of implementing network security systems and making continuous security system improvements (Shackelford, 2014).
Nature of the Study
Qualitative research describes the natural occurrence of social phenomena (Hancock, 1998). Qualitative methods offer the “human experience” that quantitative methods cannot provide (Silverman, 2013, p. 7). An inductive method is integral to qualitative research, instead of the deductive methods used in quantitative research (Hancock, 1998). Subjective information from the experience of individuals with expert or first-hand knowledge of the issue under study is required for qualitative research. Collecting data takes a long time and can require carrying out interviews and transcribing the results. Input from surveys or narratives from a group or sub-group of participants is used to carry out qualitative research (Hancock, 1998). The descriptive answers provided by questionnaires or interview questions require a content analysis (Patton, 2002, p. 110).
Phenomenology is appropriate to the problem addressed in the research because the study requires the input of a group or sub-group of particular individuals and their experience. The study of phenomena covers “events, situations, experiences or concepts” (Hancock, 1998, p. 4). If only one individual provides all the necessary information for the research, then the study is designed as a narrative (Patton, 2002, p. 57). Obtaining a solid understanding of phenomena can provide insights that lead to problem solving. Phenomena cannot be controlled like the variables in a quantitative experiment, so designing an adaptation framework is necessary. The framework needs to adapt to changes that are not predicted. The study of phenomena in daily events requires adapting to unexpected events and to change. Patton (2002, p. 57) explains that a qualitative study “offers a fluid sense of development, movement, and change.”
Large corporate houses, banks, financial institutions, and security establishments are favorite targets for cyber-security hackers. Cyber-security tools based on effective security policies will protect against organizational cyber-attacks. The primary research question for this study is:
R1. What security policies and organizational strategies will small businesses follow to defend organizational networks against cyber-attacks?
The following are the interview questions to be addressed by small business owners and managers in Washington, DC who will be participants in the study:
- What strategies are used to monitor your network traffic?
- What strategies are used to update your firewall and patching to repel hackers?
- How often do you conduct vulnerability assessments of the computers in your business? How frequent are overall vulnerability assessments applied to your network(s)?
- Are you checking your computer’s health status before and after your shift to ensure the status stays above 90%?
- What are your standard operating procedures and strategies used to conduct open port scans checking for weakness on your network(s)?
- What security method are you using to conduct a penetration test in search for high risk vulnerabilities?
- Do you identify higher-risk vulnerabilities resulting from a combination of lower-risk vulnerabilities?
- How often do you test your network protection’s ability to detect and respond to attacks?
- How is security maintained after working hours end?
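Several of the questions above concern open-port scanning. As a minimal, hypothetical sketch of such a check, the following uses only Python's standard library; the host and port list are illustrative assumptions, and a real vulnerability assessment would use a dedicated scanner.

```python
import socket

def scan_open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check a few well-known ports on the local machine
print(scan_open_ports("127.0.0.1", [22, 80, 443]))
```

A participant's "standard operating procedure" would typically schedule such checks and compare results against an approved baseline of expected open ports.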
The conceptual framework for this study is based on an adaptation framework, which will identify all possible vulnerabilities and address mitigation strategies. This type of conceptual framework considers practical issues, addresses interoperation between technologies, and distills existing knowledge and systems for adaptation into easily understandable information. Figure 1 shows the interplay between the super user, other end users, the application, and the adaptation policy.
An adaptation framework follows five aspects of change: changes to the model’s behavior, changes to the model’s state, changes to the execution of the model, asynchrony of change, and implementation probes (Oreizy et al., 2008). The five aspects cover all the elements necessary for change in the businesses with participants who are interviewed in the study.
An adaptation framework allows the observations obtained through the interview questions to be applied to possible models for change based on the necessary changes for best security practices (see fig. 1). The adaptation framework gives the researcher a way to identify elements in need of change, identify the communication components within the network, and manage the internal state of defective components (Oreizy et al., 2008).
Definition of Terms
Cole (2011) defines vulnerability as a weakness in the network system or design that is prone to exploitation by threats. Vulnerabilities are sometimes found in internet protocols, such as insecurities in TCP/IP. Vulnerabilities can also be found in applications, operating systems, or written security policies. Some of the top vulnerabilities that businesses should pay attention to include thumb and zip drives, USB drives, wireless access points, optical media, smart phones and portable devices, and email communication.
A threat is defined as a potential danger to a company’s assets. In regards to network security, the threat is a potential danger to the network or entrusted databases (Probst, 2010). Threats are recognized through identifying potential vulnerabilities such as access points, passwords, and authentication errors. The entity which takes advantage of vulnerabilities, such as the software implemented by a hacker, is referred to as a threat agent or threat vector. In addition to hackers, threats can come in the form of inside threats, cyber-attacks, and cloud storage attacks.
Talabis and Martin (2013) define risk as the likelihood of a specific threat attacking and exploiting a particular weak point in the network. A risk does not necessarily mean that the system is currently being penetrated; it simply signifies that the possibility exists. The more vulnerabilities in a system, the greater the risk for threats and attacks. Risk management strategies such as cyber liability insurance can help to mitigate the effects of attacks and lawsuits resulting from security negligence.
Network security liabilities are defined as the actual or alleged incident of the transmission of malicious code or unauthorized access to the network (Manshaei et al., 2011). This can include both company damages and losses as well as security breaches affecting consumer or organization information.
A sequence of commands or a piece of software taking advantage of a vulnerability in order to cause damage is referred to as an exploit (Cole, 2011). Exploits are commonly categorized by the type of vulnerabilities they target and the effects of running the exploit in the computer network. When all exploits are eliminated and repaired, a computer network system is deemed secure. Exploits can include web security exploits, malware, denial-of-service (DoS) attacks, cyber-crime, and injection exploits, among others.
Iosifides (2011) defines a countermeasure as a safeguarding strategy designed to mitigate potential risks by eliminating or reducing the vulnerability rather than the threat. A countermeasure can also be called security control. By creating mechanisms to minimize vulnerability, the possibility of exploit is greatly reduced.
Probst (2010) defines an asset as any information owned by a company which has value to said company. In terms of network security, assets can be hardware, software, data, and any knowledge that contributes to the efficient running of the organization. Assets should be protected from illicit use or access to protect organizations from losses and theft.
Controls are measures installed in order to deter insecurities and exploits (Liu et al., 2009). Controls act to reduce the risk that vulnerabilities will be exploited by threats. A control is similar to a countermeasure but is more narrowly focused on a particular task rather than the system as a whole. There are three types of controls: preventive controls prevent incidents from occurring; detective controls identify incidents and alert IT to the intruder; and corrective controls limit the extent of damage by recovering operations to normal status as quickly and efficiently as possible (Liu et al., 2009).
Lewko et al. (2010) define encryption as the process of converting data from a readable format, or plain text, to an unreadable format, or cipher text. Encrypted data can only be changed to plain text by the sender and the specified receiver, therefore the information is protected from any potential threats.
Decryption, the opposite of encryption, is the process of converting cipher text back to plain text (Lewko et al., 2010). In symmetric encryption, the key used for encryption is the same as the one used for decryption.
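As a toy illustration of the symmetric case, where a single shared key both encrypts and decrypts, consider the following sketch. The XOR scheme shown is purely pedagogical and is not a secure cipher; production systems would rely on a vetted cryptographic library.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key.
    Applying the same function with the same key twice recovers the plain text."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret-key"
plain = b"quarterly sales figures"
cipher = xor_cipher(plain, key)      # encryption: plain text -> cipher text
restored = xor_cipher(cipher, key)   # decryption with the same key
assert restored == plain
```

The round-trip assertion demonstrates the defining property of symmetric schemes: without the key, the cipher text is unreadable; with it, the plain text is fully recoverable.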
Malware is malicious content in the form of software which negatively affects the security and functioning of a network (Yin & Song, 2013). This software disrupts and disables normal functioning of the network, therefore creating an opportunity to gather information and cause harm within a network. Malware is typically introduced in the form of a code that acts upon entry to the network.
The research methodology assumes that the content of the participants’ responses can be compared and contrasted. The assumption that the participants act within certain structures and constraints is based on ontological (reality) features of qualitative research. The structure and constraints are the parameters of their activities related to the research topic. The assumption based on epistemology is that the participants’ knowledge of the topic falls within the same range of aims and perspectives (Hathaway, 1995).
Significance of the Study
Contribution to Business Practice
The world is more connected through the internet now than it ever has been, and it is likely that this interconnectedness will continue to grow and prosper with the constant advance of internet technology. The study of the security of business information is crucial because of the high prevalence of cybercrime and information hacking. In order for businesses to take network security seriously, security must recognize and accommodate business needs and problems and create a clear benefit for businesses (Howard & Lipner, 2009). As the world economy and infrastructure become increasingly located off-site, the information and money stored in databases are prone to extortion, theft, manipulation, identity theft, and numerous other malicious activities.
In the early 1990s, the advent of the internet in corporate business use created a need for security policies for remote transactions. With the introduction of security challenges such as worms and viruses, policies and protocols were created and tested to protect sensitive data in transit (Gollmann, 2010). While these basic security threats have long since been addressed, threats have become increasingly creative in their exploit approaches. Currently, remote and cloud computing are experiencing major issues with security. Improved security in cloud computing will not only ensure security for information already stored remotely, but it will also promote the growth of cloud architecture (Subashini & Kavitha, 2011).
As technology grows, so will threats. To protect businesses from theft and malicious activity, network security must continue to grow as well.
Unfortunately, information technology professionals often struggle with building and maintaining consistently secure networks. One quick-fix is to implement security patches (Khouzani, Sarkar, & Altman, 2010), but these require constant updating. Within a business, the information officer must continually keep employees informed on changes in software applications, operating systems, and proprietary software. This increases the risk for missing software updates. Trainings can be highly technical, and the risk for user error is high. This opens the door for further risk. Not only must employees be well trained, technology management and trainers must commit to understanding the most current security options and implementations.
This study aims to discourage the norm of dwelling on past security issues and focuses only on present security issues. While past security issues provide useful history from which to learn, the best network security is created by focusing on the most up-to-date options. This study will also avoid surface skimming, which covers only the basics of security issues; rather, it will perform in-depth research in order to assess the most effective methods of business network security.
Implications for Social Change
The research has the potential to train and educate technology and security professionals in order to maintain and improve the security of business infrastructure, which influences the stability of world commerce. Through studying vulnerability issues, including viruses, worms, and malware, as well as the hackers that create them, business and commerce security can be vastly improved.
This study may also contribute to the training of young professionals in the field of information technology. Through reducing the work load of information technology departments, the efficiency of tasking and work load will be improved. This will allow young professionals to be more productive at work and of greater benefit to society at large by enhancing personal network security.
A Review of the Professional and Academic Literature
For the purposes of this study, an academic review was performed covering a range of information regarding network security, including history, current trends, advancement measures, and types of threats. The review reflects the most current issues facing the field of network security and focuses only on major areas of importance to the goals of the research. The review covers the history of network security in order to understand the reasons for the security and related vulnerabilities as well as methods for creating solutions. The review was compiled from academic peer-reviewed journals and published books on specific network security subject matter.
The field of network security is in constant need of increased and improved software engineering practices to protect information from threats and attacks, and information technology professionals and business organizations must remain vigilant about protection (Information Resources Management Association, 2013). Creating network hardware, software, and applications that are proactively protected from attacks is paramount to network security. When a network and its components are designed and structured securely, the system presents a solid defense and eliminates potential single points of failure. A well-engineered network is designed in such a way that changes can be easily implemented as needs change and evolve. Liu (2011) found that the more often a network required changes and updates, the more likely it was for unintentional flaws to be introduced to the network. Hence, a network architecture designed to need fewer alterations and changes will ultimately be more secure than one needing constant updating.
In both small and large networks, security is a large concept, combining over two decades of research and accumulated knowledge on the various components of security. This research aims to cover most areas of security and insecurity. A simple way to explain security is to contrast it with privacy. Many people see privacy and security as similar issues; however, while privacy is a policy of compliance, there is no enforcement of privacy without the methodology of security (Howard & Lipner, 2009). Research defines a number of key areas that require proactive countermeasures before vulnerabilities are exploited in order to maintain security, the most important of which is cryptography sessions (Shamoo & Resnik, 2009). In order to enable secure cryptography sessions, it is crucial to maintain updated knowledge and understanding of program development processes, operating system controls, interfaces, and aggregation controls.
The reviewed literature is necessary in order to fully understand this study’s focus of determining new and evolving internet threat tactics and creating plans for prevention of security breaches. A historical understanding of network security serves to provide an understanding of the history of attacks, successes and failures in the improvements of network security, and valuable lessons learned in providing protection of vulnerable data.
Computer Network Defenses
The first area to examine is computer network defense. In particular, four major areas of study will be reviewed: security information and event management, business intelligence, slow and low intrusion detection and intrusion detection at large, and visualization aspects in the whole field of network security (Kar & Syed, 2011).
Security information and event management. Security information and event management (SIEM) is defined as the software, appliances, and managed services combining security information management (SIM) and security event management (SEM) to provide real-time analysis of security alerts (Miller, 2011). SIEM systems combine data from events, threats, and risks in order to provide security intelligence. These solutions can be used to create a log of security measures on data formats as well as to generate compliance and retention reports (Caviglione et al., 2013). Other uses for SIEM include data aggregation, correlation, and forensic analysis. Current issues facing SIEM include the development and implementation of high-assurance systems designed to aggregate security data across several networks. The development of high-assurance systems with the capacity to store security data from wide networks provides a holistic view of network health and status. SIEM can also optimize the monitoring and involved processes of existing networks. Through this optimization, these systems may increase their efficiency and efficacy in providing security status and coverage-area maintenance.
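The aggregation-and-correlation idea behind SIEM can be sketched, in deliberately simplified form, as counting suspicious events per source across several log feeds and flagging sources that cross a threshold. The event records and threshold below are illustrative assumptions, not output from any real SIEM product.

```python
from collections import Counter

# Hypothetical event records aggregated from several log sources
events = [
    {"source": "firewall", "ip": "203.0.113.9",  "type": "blocked"},
    {"source": "auth",     "ip": "203.0.113.9",  "type": "login_failure"},
    {"source": "auth",     "ip": "203.0.113.9",  "type": "login_failure"},
    {"source": "auth",     "ip": "198.51.100.4", "type": "login_failure"},
]

def correlate(events, threshold=2):
    """Flag IPs whose count of non-informational events meets the threshold."""
    counts = Counter(e["ip"] for e in events if e["type"] != "info")
    return [ip for ip, n in counts.items() if n >= threshold]

print(correlate(events))  # ['203.0.113.9']
```

The point of correlation is visible even in this toy: no single log source shows the full picture, but combining the firewall and authentication feeds reveals one address generating repeated suspicious activity.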
Business intelligence. Business intelligence (BI) is a set of theories, methodologies, architectures, and technologies designed to transform unstructured, raw data into meaningful information for business purposes. These tools and systems play a key role in strategic planning and statistical analysis for corporate security protection. BI covers a wide range of network security, including artificial intelligence and robotics (Sabherwal & Becerra-Fernandez, 2009). Through the process of extraction, BI produces meaningful and useful resources to help secure the business environment. Over the history of business, BI has transformed from manual transactions to applications, managed transactions, and network services. With the development of more efficient technology come increased insecurity risks. Network security is a means of implementing business globalization with secure criteria. BI can process large amounts of information and can identify and develop new opportunities in a market niche, providing a competitive market advantage and long-term stability.
Another important aspect of BI is the various software applications aiding in fighting insecurity. The BI application SAS Analytics was developed for advanced analytics and predictive analysis of business information. It works by integrating available data to provide important information to users in an easy point-and-click interface. Information is presented to the user in declarative statements and tables. However, this software, along with many BI applications, has inadequate self-defense. Since it works best on the Internet and is prone to attack, SAS and other BI applications should be complemented with network security systems (Poole & Mackworth, 2010).
BI frameworks serve to maximize the importance of historical background and to present network defense data in real time. Information technology professionals are working to implement and develop new systems that will formally allow organizations and businesses to track threats and receive warnings of impending danger (Cruz-Cunha & Varajao, 2011). The input and output operations of read-centric operations have the potential to be positively affected by the implementation of these BI applications.
Slow and low intrusion detection. Slow and low intrusion detection has gained significance in wireless network security. These networks are inherently vulnerable to attack, and even more so with advances in hacking (Mishra, Nadkarni, & Patcha, 2004). To provide security, intrusion detection is implemented in the form of a software application that provides countermeasures to fight intrusion. This helps secure the data in databases entrusted to networks. Intrusion detection applications work most effectively when paired with a second application, running within the intrusion detection application, that fights intrusions once they have been detected. Currently, capabilities are being developed to utilize existing security architecture platforms and detect misuse of network functionalities.
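Slow-and-low attacks evade simple rate limits by spreading attempts over long periods. A hypothetical long-window check, with illustrative window and threshold values, might look like the following sketch:

```python
def slow_low_alert(timestamps, window_hours=24, max_attempts=5):
    """Flag a source whose failed attempts, though infrequent, exceed
    max_attempts within any sliding window of window_hours.
    `timestamps` are seconds since some epoch."""
    window = window_hours * 3600
    timestamps = sorted(timestamps)
    for i, start in enumerate(timestamps):
        # count every attempt that falls within the window opened at `start`
        in_window = [t for t in timestamps[i:] if t - start <= window]
        if len(in_window) > max_attempts:
            return True
    return False

# Six failures spread over 20 hours: too slow to trip a per-minute
# rate limiter, but caught by the long-window check
attempts = [0, 4*3600, 8*3600, 11*3600, 15*3600, 20*3600]
print(slow_low_alert(attempts))  # True
```

The design choice here is the long observation window: a conventional detector resets its counters quickly, which is exactly what a patient attacker exploits.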
Visualization. Visualization addresses challenges in the ever-changing field of network security by providing security analysts with better tools for discovering patterns, detecting anomalies, and identifying correlations (Goodall, 2008). There are many potential applications of visualization for network security including correlating intrusion detection events, identifying malicious activity such as worms and botnets, understanding the makeup of malware or viruses, and communicating the operation of security algorithms to IT professionals. This information can be systematically collected and used in forensic analysis after an intrusion occurs in order to understand how the exploit entered the system. Visualization applications can fill in the gaps where intrusion detection systems fail.
Malicious Code Analysis
Malicious code analysis is comprised of three main areas: static reverse code engineering, dynamic reverse code engineering, and the development and reporting of network defense software. Malicious code analysis is highly important to the creation of a secure network system environment, and its major objective is to fight network insecurity with software applications. Analyzing code for malicious behavior allows organizations to gain a greater understanding of what is happening within the attack or exploit as well as the capacities of the attack. The downside of code analysis is that it can be time-consuming.
Static reverse code engineering. Static reverse code engineering is the process of analyzing malicious code or malware, without executing it, by use of sophisticated in-house analysis software. The engineering process can include syntax checking, type checking, and control and data flow analysis. The in-house software assesses the principles of the malicious code by disassembling and then recompiling the information, with a low-level pseudo code used to determine program structure and flow. The analysis identifies the command set capabilities of the communication protocol and the communication endpoints, including the hosts, IP addresses, and domains (Singh & Singh, 2009). Finally, the identification and defeat of built-in code obfuscation is performed against executable packing algorithms and other anti-reversing or anti-forensic techniques and methodologies. Most static reverse engineering tools present the extracted information in graphic representation.
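As a toy analogue of the endpoint extraction described above, the following sketch scans a binary's bytes for candidate IPv4 addresses and domains without ever executing the sample. The regular expressions and sample bytes are illustrative assumptions, far simpler than real static analysis tooling.

```python
import re

def extract_endpoints(binary: bytes):
    """Statically extract candidate IPv4 addresses and domains from a
    suspicious binary, without executing it."""
    text = binary.decode("latin-1")  # lossless byte-to-character mapping
    ips = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", text)
    domains = re.findall(r"\b[a-z0-9-]+\.(?:com|net|org)\b", text)
    return ips, domains

# Hypothetical sample: header bytes with an embedded C2 address and domain
sample = b"\x00MZ...connect 198.51.100.23 ... beacon.example-c2.com\x00"
print(extract_endpoints(sample))
```

Even this crude string search illustrates why static analysis is valued: communication endpoints can be recovered before the malware is ever allowed to run.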
Dynamic reverse code engineering. Dynamic reverse code engineering serves as an 'end to start' mode of coding that tests applications before implementing them. Unlike static reverse analysis, this process is executed in a controlled environment. It includes the execution of malicious code in a secure virtual environment to enable analysis of runtime behavior. Dynamic reverse code engineering analyzes malware and identifies the changes necessary to repair the infected system and the particular artifacts that indicate infection (Krogmann, 2012). It also serves to identify and defuse the anti-reversing or anti-debugging techniques affecting malware runtime behavior. Lastly, dynamic reverse code engineering analyzes network activities and confirms malware communication methods and their corresponding endpoints (Dogru, 2011). The functionality of this reporting can be further extrapolated to the compilation and reporting of malware research findings. Malware reporting aids in the identification of unique malware characteristics necessary for malware detection on other, similar systems. Reporting involves scrutiny of coding techniques, usage of language (commonly referred to as proficiency), and file format properties that identify the level of code sophistication and the potential origin (Zubairi & Mahboob, 2012).
Dynamic reverse code engineering can be broken down into basic and advanced practices. Basic analysis consists of running malware in a controlled environment, tracking and understanding its activity, and cleaning up the harm caused. Basic analysis does not require specific programming knowledge and can be performed by people without specific IT training (Krogmann, 2012). Advanced analysis requires a greater knowledge of the internal makeup of the operating system including language and compiler codes, but it is much more effective in capturing and eliminating malicious activity.
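The before/after comparison at the heart of basic dynamic analysis can be sketched as diffing two snapshots of system state taken around a sandboxed run. The snapshot dictionaries below are hypothetical stand-ins for real filesystem and registry captures.

```python
def diff_state(before: dict, after: dict):
    """Compare system-state snapshots taken before and after running a
    sample in a sandbox; new or changed artifacts often indicate infection."""
    created = {k: v for k, v in after.items() if k not in before}
    modified = {k: v for k, v in after.items()
                if k in before and before[k] != v}
    return created, modified

# Hypothetical snapshots: path -> content fingerprint
before = {"C:/Windows/hosts": "orig", "C:/run.exe": "app"}
after  = {"C:/Windows/hosts": "redirected",
          "C:/run.exe": "app",
          "C:/tmp/drop.dll": "payload"}
print(diff_state(before, after))
```

The created and modified artifacts recovered this way are exactly the "changes necessary to repair the infected system" that the passage above describes: they tell the analyst what to remove and what to restore.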
Developing network defense software. Network defense software is developed and architected based on findings from code engineering and is used to provide the best defense of network systems. It involves applying knowledge of malicious codes, their trends, and learned concepts in order to support protecting networks through the customization of existing security tools. This is accomplished through maintaining and customizing in-house malware analysis tools to incorporate the newest trends detected in malicious coding techniques (Koch, 2011).
A review of network security includes cross-domain solutions such as research and development, architecture support, system support, and firewall technologies.
Research and development. Chen (2011) describes research and development as the creation of high-assurance systems that facilitate the provision and sharing of information. These systems, also called decision support systems, manage the support of chat across data repositories and aid in the decision-making process of disjointed networks. Research and development also includes devices that allow for sharing web-based information across domains, systems that support Lotus Domino, and the management and enabling of information replication across one-way links.
Architecture support. Architecture support is the foundation for this research. Architecture support is a series of processes that allows facilitators to conduct internal research on all processes and mechanisms by which equipment, information, and services are protected from attack (Chang et al., 2010). The support is comprised of Department of Defense (DoD) coalition design and Navy coalition security architectures.
Through interviewing organizations and businesses to understand threats and risk in network security, this research will attempt to create network architecture that provides more effective and proactive solutions for network security risks and vulnerabilities.
System support. System support is composed of penetration testing of the existing cross-domain systems and configuration and policy guidance for the same cross-domain system (Chen, 2011). This can include services from design to administration including setup, migration of data, server security, routers, firewalls, and virus removal.
Firewalls. Firewalls play a major role in securing messages passing through the nodes and hosts of a network by filtering through only desired content. Invented in the early 1990s, the name “firewall” was adapted from the firewalls that protect buildings from the spread of fire (Meier, 2012). This protection quickly became one of the most useful programs in the field of networking. Gollman (2012) defines a firewall as a device controlling the flow of information between internal networks and the internet. Firewalls are software- and/or hardware-based security systems that control incoming and outgoing information and decide which data to transmit based on IP addresses and port numbers. Firewalls establish barriers between internal networks and external networks such as the Internet.
Depending on the security needs of a particular network, firewalls are composed of different layers (Scarfone & Hoffman, 2009). Network layer firewalls, also known as packet filters, block packets from passing through the firewall unless they meet the rules defined by the firewall administrator. Application layer firewalls filter out packets sent from applications and block other packets. All accepted packets are scanned for Trojans and worms before they are transmitted throughout the network. Proxy servers, which act as gateways between networks, respond to input packets while blocking other packets. Lastly, firewalls can perform network address translation (NAT), which protects hosts by disguising their IP addresses from potentially malicious applications or software.
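The network-layer filtering described above can be sketched as a simple default-deny rule table matched on source address and destination port. The rule set, field names, and addresses below are hypothetical and for illustration only.

```python
# Minimal illustration of a network-layer packet filter (hypothetical rules).
# Each rule matches on a source IP prefix and destination port; anything not
# explicitly allowed falls through to a default deny.

RULES = [
    {"action": "allow", "src_prefix": "10.0.", "dst_port": 443},  # internal HTTPS
    {"action": "allow", "src_prefix": "10.0.", "dst_port": 22},   # internal SSH
    {"action": "deny",  "src_prefix": "",      "dst_port": 23},   # block telnet
]

def filter_packet(src_ip, dst_port):
    """Return the action of the first matching rule, or 'deny' by default."""
    for rule in RULES:
        if src_ip.startswith(rule["src_prefix"]) and dst_port == rule["dst_port"]:
            return rule["action"]
    return "deny"  # default-deny posture

print(filter_packet("10.0.1.5", 443))    # allow
print(filter_packet("203.0.113.9", 23))  # deny
```

Real packet filters match on richer tuples (protocol, direction, connection state), but the first-match, default-deny logic is the same.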
Without appropriate cross-domain solutions, network insecurity can result in loss of information and privacy. Cruz-Cunha and Varajao (2011) found that organizations may endure losses of up to $500,000 due to internet attacks on data. Attackers seek out insecure organizations in order to attain monetary gain or cause harm. Such losses are not always recovered because it is difficult to identify attackers. Database companies storing large financial sums for customers must constantly update their knowledge of the latest forms of attack in order to protect their clients.
Cryptography, including encryption and decryption, is a highly useful tool in the protection of computer systems from internet attacks (Kollmitzer & Pivk, 2010). Cryptology involves the coding and transmission of data over the internet between two or more computer systems. The data is protected through encryption, or changing the message into an unreadable format, and then decrypted by changing the unreadable message back into readable text. With the protection of a security key, only the intended individuals can read the code and interpret the information.
Cryptography systems can be divided into two categories: symmetric key systems, sometimes referred to as private key systems, and asymmetric key systems, also called public key systems. Symmetric key systems use only one key during the process of encryption and decryption. The key is available only to the sender and the receiver. Symmetric key systems require that all computers or devices be known and pre-approved so that the key can be installed on each one. Devices that do not have the key installed will not be able to decrypt the information. Asymmetric key systems use both private and public keys to secure data. A public key is used to encrypt the data, and the receiver uses a private key to decrypt the message. In order to decrypt the information, the receiving device uses its own private key, which corresponds to the public key provided to the originating device. This method is commonly used for messages forwarded to many different recipients (Joye & Tunstall, 2012).
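The defining property of a symmetric system, that one shared key both encrypts and decrypts, can be illustrated with a toy XOR cipher. This is not a secure cipher and is not any particular algorithm from the literature; it only demonstrates the key-sharing model described above.

```python
# Toy illustration of a symmetric cipher: the SAME key both encrypts and
# decrypts (XOR is its own inverse). For illustration only; not secure.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data against the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

shared_key = b"secret-key"          # known only to sender and receiver
plaintext = b"quarterly report"

ciphertext = xor_cipher(plaintext, shared_key)   # sender encrypts
recovered = xor_cipher(ciphertext, shared_key)   # receiver decrypts, same key

assert recovered == plaintext
```

In an asymmetric system, by contrast, the encryption key (public) and decryption key (private) are different, so the encryption key can be published without exposing the ability to decrypt.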
Cryptography is monitored through digital certificates, or unique codes that confirm that each device is what it purports to be and then provide the public key of each device to the other.
Current Challenges in Network Attacks
Recent changes in online market transactions have increased the technical sophistication and potential damage of security threats. The most recent threats are highly organized and extremely difficult to detect. The aim of these attacks is the fraudulent capture of important data on network infrastructures. State-sponsored espionage and network sabotage are two common challenges for network security. Hackers and attackers managing the attacks are constantly searching for more complex means of network attack. Each new method is typically beyond the capability of network security teams to detect and control (Liu et al., 2013). For instance, a malicious attack in 2013 evaded detection by even the most powerful antivirus programs (Yin & Song, 2013). Similarly, a recent study conducted by a group of Symantec research lab members discovered 18 undisclosed and unidentified security vulnerabilities. These vulnerabilities had survived within computer networks and had gone undetected for 30 months at the point of discovery (Bayuk, 2010).
Monster DDoS attacks. Monster distributed denial-of-service (DDoS) attacks are attempts by hackers to make a computer or network resource inaccessible to users by interrupting or suspending services. DDoS attacks are becoming increasingly popular among attackers and are growing larger as attackers learn and employ new knowledge and technologies. One DDoS mitigation firm reported an 88% increase in the number of DDoS attacks in 2012 over 2011 (Baldoni & Chockler, 2012). The attacks studied showed substantial increases in both the duration and depth of attacks as well as in the impact on the attacked networks. In order to protect against DDoS attacks, security teams need to be proactively prepared rather than reacting post-attack, to prevent networks from being compromised and shut down. Since these attacks aim at barring services to system users, the security team should prioritize prevention methods, including risk assessment of attacks at unspecified times and development of immediate response processes (Baylis et al., 2012).
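One proactive measure of the kind described above is a per-client rate limiter that throttles request floods before they exhaust a service. The sliding-window approach, threshold, and window size below are illustrative assumptions, not a production mitigation (real DDoS defense operates across many distributed clients and network layers).

```python
# Sketch of a proactive flood-mitigation measure: a per-client sliding-window
# rate limiter. Window size and threshold are illustrative assumptions.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 5

_request_log = defaultdict(deque)  # client IP -> timestamps of recent requests

def allow_request(client_ip, now=None):
    """Return True if the client is under the rate limit, else False."""
    now = time.monotonic() if now is None else now
    log = _request_log[client_ip]
    while log and now - log[0] > WINDOW_SECONDS:  # drop entries outside window
        log.popleft()
    if len(log) >= MAX_REQUESTS:
        return False  # over the limit: drop or challenge the request
    log.append(now)
    return True

# A client bursting past the limit gets throttled:
results = [allow_request("198.51.100.7", now=t) for t in range(7)]
print(results)  # first 5 allowed, remaining requests denied
```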
Cloud computing. Cloud computing, or virtual storage, is a newer mode of information storage and is highly prone to attack. Within cloud storage, users can access computing power beyond their own network capabilities. However, with this comes the need for advanced security measures. As with much of network security, regulators are caught playing catch-up to identify and solve security issues (Kaufman, 2009). Concerns include jurisdiction over data, government access, and single-entity versus multiple-entity data centers. Organizations utilizing cloud storage need to implement their own network visibility checks. Cloud computing technology exists within a virtual, hidden server, which creates numerous possibilities for vulnerability and attack. Prevention methods must be employed to deal with risks, including legacy systems and study periods aimed at researching possible attacks and solutions.
Passwords. Unwanted password access is a common security threat on the internet and commonly occurs in website breaches. Hackers gain access to passwords by scanning the internet during user authentication (Savitz, 2013). During authentication, hackers can identify passwords by capturing the keys the user has tapped. Password-protected services such as Secure Shell (SSH) and remote desktop protocol (RDP) are most affected by this behavior. Passwords as a security technology are reaching the end of their lifespan due to the ease of access by hackers, rendering password security nearly useless. This is a clear example of a security method that once provided adequate security becoming insufficient as attack and hacking technology advances. The most effective proposed solution to these password failures is to set up an intrusion detection system in addition to relying on passwords for protection.
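A minimal sketch of such an intrusion detection layer might flag sources with repeated failed logins, the signature of password guessing against services such as SSH or RDP. The threshold and event format below are hypothetical.

```python
# Sketch of a layered defense alongside passwords: flag sources that
# repeatedly fail authentication. Threshold and log format are assumptions.
from collections import Counter

FAILED_LOGIN_THRESHOLD = 3  # illustrative cutoff

def flag_suspicious_sources(auth_events):
    """auth_events: list of (source_ip, succeeded) pairs from an auth log.

    Returns the set of source IPs at or above the failure threshold.
    """
    failures = Counter(ip for ip, ok in auth_events if not ok)
    return {ip for ip, n in failures.items() if n >= FAILED_LOGIN_THRESHOLD}

events = [
    ("203.0.113.4", False), ("203.0.113.4", False), ("203.0.113.4", False),
    ("10.0.0.8", False), ("10.0.0.8", True),
]
print(flag_suspicious_sources(events))  # flags only the brute-forcing source
```

Flagged sources could then be rate limited or blocked, so that a guessed password alone is no longer sufficient for entry.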
Insider threat. Insider threats commonly lack sufficient countermeasures because many business owners assume that outside threats are more dangerous and probable than insider attacks. While insider attacks are generally infrequent and irregular, once identified they can prove more destructive than initially assumed (Probst, 2010). Most information technology organizations have failed to develop effective programs to fight insider attacks, mostly due to a lack of agreement on how to mitigate this challenge. Control programs are needed to detect various forms of attack. Organizations also need visibility into their internal network security in order to identify suspicious activities in the network.
McCormac, Parsons, and Butavicius (2012) report that the majority of inside attacks were motivated by revenge (56% of cases studied), financial gain, dissatisfaction with business or organizational policies, or a desire to take inside information to new companies. The main goals achieved were sabotage (47% of cases) and financial gain and theft of information and property (42% of cases). It is important for business owners to understand why employees would potentially attack the network from within in order to appreciate the inherent risks. With this knowledge and appreciation, owners will likely invest more time, energy, and money into preventing inside attacks.
Web cookies. Tracking web cookies, or pieces of data stored in a user’s browser while using a web site, is a common means of obtaining records of individual browsing activity and history. Although cookies do not carry viruses and are unlikely to install malware on a host computer, the history records can be used to identify vulnerable points in a network. Unencrypted cookies are a major network security issue because they render systems prone to cross-site scripting. Cookies can allow hackers access to login data, which creates major network vulnerabilities (Moschovitis & Poole, 2005). The existing solution to this vulnerability is sufficient but inefficient, as it forces users to log in again any time they need to access data or directories. New research will be helpful in discovering solutions that encrypt all network cookies and encode an expiration time.
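One way to encode an expiration time and detect tampering is to bind the cookie value and its expiry to a server-side HMAC signature; signing rather than full encryption is shown here as a minimal sketch. The key and field layout are illustrative assumptions, and cookie values containing the delimiter would need escaping.

```python
# Sketch: protect a cookie value with an HMAC signature and an embedded
# expiration time, so tampered or expired cookies are rejected.
# The key and "value|expires|signature" layout are assumptions.
import hashlib
import hmac
import time

SERVER_KEY = b"server-side-secret"  # hypothetical; never sent to clients

def make_cookie(value: str, ttl_seconds: int, now=None) -> str:
    expires = int((time.time() if now is None else now) + ttl_seconds)
    payload = f"{value}|{expires}"
    sig = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{sig}"

def verify_cookie(cookie: str, now=None) -> bool:
    value, expires, sig = cookie.rsplit("|", 2)
    payload = f"{value}|{expires}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False  # signature mismatch: cookie was tampered with
    return (time.time() if now is None else now) < int(expires)

cookie = make_cookie("session=alice", ttl_seconds=60, now=1000)
assert verify_cookie(cookie, now=1030)                              # valid
assert not verify_cookie(cookie, now=2000)                          # expired
assert not verify_cookie(cookie.replace("alice", "admin"), now=1030)  # tampered
```

Because the signature covers the expiry field, a client cannot extend its own session lifetime, and the server need not force a fresh login on every access.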
Plain hashes. Hashes are processes that convert input into output in a way that does not allow the original input to be recovered. Unlike encryption, passwords stored as plain hashes cannot be decrypted. Hashes, a means of indexing and retrieving information in a database, are used in many complex encryption algorithms (Lakshmiraghavan, 2013). A “salt” is random data introduced into the hash input in order to make attacks extremely difficult. Salts ensure that a stored hash cannot be translated back into the original text by lookup. With a large enough salt, even if an attacker gains access and compromises a database or record table, it is nearly impossible to gain access a second time. The best method of ensuring safety with hashes is hiding the salt or encryption key to keep the hacker from entering the network through the salt mechanism. Without a salt, there is not adequate security on the existing network, as passwords can often be recovered from their hashes through precomputed lookup attacks.
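The salted hashing described above can be sketched with Python's standard library: a fresh random salt is stored alongside each hash, so identical passwords produce different records and precomputed lookup tables become useless. The iteration count below is an illustrative work factor.

```python
# Minimal sketch of salted password hashing with Python's standard library.
# The salt is random per password and stored next to the derived hash.
import hashlib
import os

ITERATIONS = 100_000  # illustrative work factor to slow brute-force guessing

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest) for storage; a fresh salt is drawn if none given."""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the hash with the stored salt and compare."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS) == digest

salt, digest = hash_password("correct horse")
assert verify_password("correct horse", salt, digest)
assert not verify_password("wrong guess", salt, digest)

# The same password hashed with a different random salt yields a different digest:
assert hash_password("correct horse")[1] != digest
```

Note that the derivation is one-way: verification re-computes the hash from the candidate password rather than ever reversing the stored digest.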
Share hosting. Share hosting, or web hosting that resides on one server shared by more than one website, places many customers’ websites on the same server, with the server’s resources shared among all of the sites hosted on it. Share hosting is not the most secure solution for a business with confidential or protected data. In such a system, each site exists on its own partition to separate it from other sites (King & Baeza-Yates, 2009). This methodology is generally the most economical because of the shared costs, but it limits security and privacy. When attackers manage to get into one area of a hosted website on a shared server, they are more likely to gain access to all websites on the server. Organizations are better protected with dedicated server hosting or secure cloud hosting.
Adaptation framework. In real-world situations, the framework used for a research study must be able to incorporate the unpredictability of events that take place. The researcher understands that the subject of study is “complex and changing” (Patton, 2002, p. 42).
Summary and Transition
Section 1 has set the foundation for the research. Some basic definitions concerning the research have been defined in detail in this section. In addition, the limitations, delimitations, and assumptions have been outlined in this section. Academic literature on network security in businesses has been described in order to understand how new security mechanisms can translate to secure network systems based on the individual needs and vulnerabilities of the network. Based on the literature review, security conditions and policies can be established and defined. The problem statement, addressing the best practice for determining new and evolving internet threat tactics and creating the most effective plan for prevention of security breaches, is based on the growth of internet insecurity. The study utilizes a qualitative methodology under which a model will be devised as a prototype, with three stages to completion. A brief description of the significance of the study has been emphasized along with its implications for social change.
In Section 2, the project is presented along with a detailed discussion of the basics laid out in Section 1. The research and its applications are highly dependent upon these foundations and principles.
Section 2: The Project
Section 2 builds on Section 1 by defining and explaining the nature of the study as it relates to the literature and research review. This section defines the procedures for obtaining participants, explains the design and methodology of the study, analyzes the reliability and validity of the study, and connects the information obtained through the literature review with the application to business practices and implications for change.
Computers and data communication networks are an essential resource for many organizations. Their use in business, academic, and government entities is ubiquitous. Businesses and organizations see these resources as a set of devices, software, and technology that work together to implement security policies. Organizations need complex network security solutions such as firewalls and anti-attack programs to provide adequate security. Rather than relying on older security methods, new methodologies must be consistently researched and implemented to stave off new types of attacks.
The issue of security in computer networks has been researched extensively, but there is a lack of quality outcomes on specific aspects of the research. This study will use a qualitative method of data collection in order to gain an in-depth understanding of the issues and needs of the participants rather than specific and defined concepts. Data collected with this methodology will provide conceptual data from real-life experiences of business network insecurity. The research aims to understand both areas of familiarity and unfamiliarity within business information technology in order to provide specifically tailored solutions. Through developing a thorough understanding of business vulnerabilities, history of attacks, current solutions, and evolving internet threat tactics, this study seeks to develop the most effective plans to prevent security breaches and ensure business privacy.
This study has both a main goal and a secondary goal. The major goal is to determine new and evolving internet threat tactics and to design methods of preventing these threats. The secondary goal is to research the history of insecurity within each network in order to understand the origin of threats and thereby identify appropriate solutions. The research aims to describe security issues relevant to general research on internet security and to future studies.
Role of the Researcher
In this qualitative study, the researcher serves various roles including facilitation, design, analysis of data, transcription of data, verification, and reporting. These roles are described and explained here.
The researcher facilitates all aspects of the study including obtaining participants, explaining the nature of the study, gathering and analyzing data, and presenting the findings. In a qualitative study, the researcher plays an interactive role with the participants. In this particular study, the researcher will act as interviewer to gather data from the participants. The researcher will obtain participants, explain the nature, design, and purpose of the study, and conduct interviews.
The researcher has the responsibility of the entire design of the study. This includes multiple components including determining the problem statement and purpose statement, identifying background issues, reviewing the current literature, and structuring the study. The design phase is crucial in qualitative research as it creates the foundation of the study. Planning, methodology, constraints, timeframe, and participants are all determined and explained in the design phase. Additionally, sampling methods and all phases of data analysis are determined in the design phase. Qualitative design validity is enhanced by accuracy of data transcription, agreement among the research team on data findings, member checking, and active searching and reporting data discrepancies.
The researcher provides an analysis of the study’s requirements in order to determine the necessary tools for completing the research. The researcher determines the system requirements and necessary measures. The analysis stage determines the length of the study as well as the categorization and naming of the research phenomenon. In this case, the research phenomenon is the network and its components. This allows those who read the reported research to understand at whom the study is directed. Analysis includes multiple facets such as interim analysis, reflective notes and memos throughout the data analysis process, and the coding of data by dividing it into units of meaning.
The researcher transcribes data collected from the study. In this study, interviews will be recorded and subsequently transcribed to text. Findings are transcribed into written form to allow detailed study and correlation with analytical notes. While the actual transcription may be performed by research assistants, the researcher is in charge of overseeing the process and ensuring that all transcription is correct. The researcher will instruct the assistants on the process of transcribing. Transcription should be performed in such a way that it captures all sounds and words relevant to the research topic.