Security Testing – Beware Of The Risks And Prevent Catastrophe
When it comes to software development, security testing should be taken seriously.
Adobe found this out the hard way in 2013, when one of the biggest hacks in history resulted in the theft of 153 million account details. Hacked information included usernames, email addresses, credit card details, and encrypted passwords. Analysis showed Adobe had been using questionable encryption techniques, particularly where passwords were concerned. Passwords shouldn't be encrypted at all; they should be 'hashed' and 'salted'. Had Adobe adequately tested for security flaws in its software, this data breach might not have made for such damaging global headlines.
So what is security testing?
Security testing is where testers check an application or software product to identify whether it's secure. It's about making sure that, in the event of a malicious attack, your product is covered. Security testing helps to identify whether your product is vulnerable to attacks, and how easy it would be for someone to hack into your system and breach your data. Putting security testing measures in place helps to determine whether your product protects its data while also functioning as intended.
Why security testing and risk management are important
Failing to properly test the security of your software can open up both your software and business to huge risks. Depending on the nature of your business or product, you could experience a huge number of attacks on, or breaches of, your software. Data could be lost or stolen. Hackers could find their way into your software. Or unauthorised users could gain access to confidential areas of your product. Worse still, you could experience system-wide security failures.
These risks, when realised, can result in lost revenue, customer dissatisfaction, data inconsistencies, or even legal issues.
Even if your product has undergone extensive functional testing, it's imperative to test the security aspects of your product. This is especially important if it houses sensitive and confidential data. Risk management of software is an important component of the testing process. Software risk is typically regarded as a composition of security, performance efficiency, robustness, and transactional risk throughout the system.
Crowd testing offers the opportunity to minimise these risks by utilising people at scale.
Imagine creating a test cycle. That test cycle is announced to a global pool of testers within 24 hours, and within hours of launch a large number of diverse testers, using a wide range of devices, are actively seeking out security flaws. You have 24/7 access to the crowd testing platform to monitor and review, in real time, what is being reported back to you. This is quicker than any other testing option allows. You can then fix any problems faster, which further protects your business and your product from people attempting to breach your software.
Had you left the test cycle to one or two software developers or consultants, this process would take much longer. They might dedicate the same 50-80 hours to testing, but for one or two people that could stretch over several weeks. The longer timeframe leaves your product exposed to significant risk, while also slowing down your entire development process.
Six key security areas that may require testing
Typically, there are six overarching areas that should be covered during security testing. These include confidentiality, integrity, authentication, authorisation, availability, and non-repudiation. You may need to test all six, or just some, of these areas. This depends on the specifics of the software you are developing and its security requirements.
Does your software require the exchange of confidential information? Confidentiality testing helps protect this information and ensures it is disclosed only to the intended recipients.
It's important the information held within your software retains its integrity. The information should be protected so that it can't be altered by unauthorised entities during transfer.
Integrity testing is similar to confidentiality testing. The difference is that integrity testing adds extra information to the communication under test, which forms the basis of an algorithmic check. Its purpose is to verify that the correct, unaltered information is sent from one application to another, whereas confidentiality testing encrypts all communication.
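One common form of that algorithmic check is a message authentication code: the sender attaches a keyed digest, and the receiver recomputes it to detect tampering. A minimal sketch using Python's standard library (the shared key and messages are illustrative; in practice the key would be exchanged securely):

```python
import hashlib
import hmac

SHARED_KEY = b"illustrative-shared-key"  # assumption: agreed out of band

def sign_message(message: bytes) -> bytes:
    """Produce an HMAC tag that travels alongside the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify_message(message: bytes, tag: bytes) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

An integrity test would alter a message in transit and confirm the receiving application rejects it, since the recomputed tag no longer matches.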
Authentication testing refers to ensuring the identity of a person using the software is correct. It could also be about ensuring a computer program, device, or application, is trusted. Authentication testing also refers to ensuring a software product or user is what it claims to be.
Authorisation testing refers to the process of ensuring that a particular request made to software is permitted to happen. For example, a user may request to receive a service, perform an operation, or log in. An example of authorisation is access control, which refers to selective restriction of access to a resource, place, or software. Authorisation testing helps to ensure this function is performing correctly.
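As a simple sketch of what authorisation testing exercises, consider a role-based access-control check. The roles and permissions below are hypothetical examples, not a prescribed scheme:

```python
# Hypothetical role-to-permission mapping, for illustration only.
PERMISSIONS = {
    "admin": {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_authorised(role: str, action: str) -> bool:
    """Permit an action only if the role's permission set includes it."""
    return action in PERMISSIONS.get(role, set())
```

An authorisation test suite would probe each role against each action, paying particular attention to requests that should be denied, such as an unknown role or a viewer attempting to delete.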
Availability testing refers to ensuring information is available to authorised users if and when they need it. For example, routing errors could leave certain users incorrectly unable to access parts of a website, network, or software. Availability also covers the speed at which a website or piece of software loads.
Non-repudiation testing, where digital security is concerned, is about ensuring a transferred message or document is genuinely sent and received by the parties at either end. That is, non-repudiation guarantees the sender of a message can't deny having sent it, and the recipient can't deny having received it. It also ensures the message or document hasn't been modified in transit.
Other risk management and security testing suggestions
Some software or systems may involve live user data during the security testing phase. It can be worth using anonymised, masked, or even fake data, rather than live user data, during the testing and development process. This is especially important if the software product is already live and active.
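As a rough sketch of masking, a test fixture might redact identifying fields before records reach a test environment. The field names here are assumptions for illustration, not a fixed schema:

```python
def mask_email(email: str) -> str:
    """Keep the first character and the domain; hide the rest of the local part."""
    local, _, domain = email.partition("@")
    return f"{local[:1]}***@{domain}"

def mask_record(record: dict) -> dict:
    """Return a copy of a user record with sensitive fields masked."""
    masked = dict(record)
    if "email" in masked:
        masked["email"] = mask_email(masked["email"])
    if "card_number" in masked:
        masked["card_number"] = "****" + masked["card_number"][-4:]
    return masked
```

Masked records keep enough shape for realistic testing while ensuring that a leak from the test environment exposes nothing usable.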
A penetration (pen) test is a process that can help you to identify security weaknesses in your system. It involves an authorised simulated attack, showing you how easy it is to gain access to the system's features or data. Pen tests can be automated or performed manually. They involve a reconnaissance process that collects information about the target (your software) and identifies potential weak points of entry. Then a pen tester attempts to break in and reports back on their findings.
Pen tests, for the most part, are used to determine software's security weaknesses. Pen tests may also be used to test an organisation's ability to identify and respond to security problems, its employees' security awareness, or security policy compliance.
Other security testing technologies
A few other security testing technologies could include data loss prevention, access management, or encryption testing.
How crowd testing security can help
Like all technology, the nature of software security risks changes rapidly. Defending your software requires vigilance. Testers must anticipate and correct all possible vulnerabilities.
Conversely, attackers or hackers only need to find one flaw to carry out an attack.
One way of tackling the issue is to choose vetted crowd testers who are registered ethical hackers and members of security groups such as OWASP, NULL, and DEFCON. Doing so expands your testing reach and speed, increasing your chances of finding and fixing problems before the attackers do.
Get in touch with crowdsprint today to talk about how our global pool of testers can help ensure your software or application is secure.
Double your test coverage for half the cost!
Now that you know all the advantages crowdsourced testing offers, check out the benefits it can bring to your business.