# Quality

For Apps with love, quality means that our customers are satisfied with our service across the entire development process of a product. Quality management and quality assurance are success factors for Apps with love.

## Quality management and quality assurance

To be successful in the marketplace, it’s not enough just to develop the right tools, apps and software. Quality is a crucial component for the success of a digital product.

Our internal quality assurance process helps us clarify the most important questions about responsibilities, risks, requirements, acceptance criteria and system contexts at an early stage, and to plan quality assurance measures that prevent errors from entering the system early in the project.

## Keep Bugs out of Production

Imagine you have not only a good idea, but also the right budget and a good development team, and the project is well on schedule. Even when the concept, design and technologies are all in line, errors still creep in during development. The later such errors are detected and eliminated, the greater the cost: an error that has to be fixed in a production system can cost up to 500 times as much as one detected and eliminated in the initial phase of a project. For this reason, quality assurance from the very first idea, through conception and design, and on into development and production operation is a matter close to our hearts.

## Requirements are the heart of testing

We test both the functional and non-functional requirements of your digital product, irrespective of whether it’s a mobile app, a progressive web app, a website or an IoT gadget.

### Functional requirements

You may already bring your functional requirements into the development. We are happy to support you in eliciting and defining your requirements, so-called requirements engineering. From the requirements, we create user stories and define the acceptance criteria. These functional requirements are the basis for creating the test cases against which we conduct and document our testing.

### Non-functional requirements

Non-functional requirements describe how and under what conditions a function should behave. Typically, they set an underlying specification for a requirement. Non-functional requirements can be divided into performance characteristics, quality characteristics and marginal conditions.

  • A typical performance characteristic: the system can process 1000 requests per minute without exhausting the available resources.

  • A typical quality characteristic: for quality assurance, the quality models in accordance with ISO 25000 are employed.

  • A typical marginal condition: the system must be developed completely in compliance with the GDPR.

Don’t worry – we’re happy to help work out the non-functional requirements together with you. 
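To show how a performance characteristic like the one above can be made verifiable, here is a minimal sketch: a throughput measurement against a placeholder `handle_request` function (both the function and the numbers are invented for illustration, not our actual tooling).

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload):
    # Placeholder for a real call into the system under test.
    return {"status": "ok", "payload": payload}

def measure_throughput(n_requests=1000, workers=50):
    """Fire n_requests concurrently and return requests per minute."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(handle_request, range(n_requests)))
    elapsed = time.perf_counter() - start
    # Every request must still succeed under load.
    assert all(r["status"] == "ok" for r in results)
    return n_requests / elapsed * 60

if __name__ == "__main__":
    print(f"{measure_throughput():.0f} requests per minute")
```

The measured figure can then be compared against the requirement ("at least 1000 requests per minute") in an automated check.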

## The right testing methodologies to match your needs

Are you worried about bad reviews and feedback for your digital product? Does your product have to conform to internal company quality standards? Should there be assurance at the end of your development that your product complies with usability requirements? Been there, done that. 

We’re happy to help you with a targeted selection of tests that increase the satisfaction of your end users and ensure that all functions are checked and re-checked.

  • Functional testing

    In functional testing, the functions of a product are checked to ensure that they do what is defined in the requirements and the corresponding acceptance criteria. Inputs are entered and processes are run through to see whether the results match the expected results.

  • End-to-end testing

    The so-called end-to-end test is a technique to test whether processes within a digital product behave from beginning to end as expected from the viewpoint of the user. In this, several user stories are tested end-to-end with the goal of ascertaining system dependencies and ensuring data integrity between different systems. Interfaces, communication with other systems and databases are also included in the testing process.

  • Compatibility testing

    An important component of testing is the compatibility of an application or a website. The functionality and the design should work on different devices and browsers and with different operating systems and platform versions. Compatibility testing checks whether the functionality behaves the same way everywhere, whether deviations occur, or whether a function can’t be used at all.

    Since screens come in many different sizes and resolutions, the design and layout should be checked for how they behave on the various mobile devices and browsers. Display size, CPU and RAM also vary from device to device, and errors can quickly crop up, especially on older devices. Before development starts, the target group should be clear so that the devices, operating systems, platform versions and browsers on which the application or website should run can be defined.

  • Performance testing

    Would you like to know whether your software runs perfectly even under increased use? Performance testing can be subdivided into two categories: load testing and stress testing. As the name suggests, in a load test we measure the performance of the system under a defined load (user requests). A certain number of users (for example, 1000) is simulated accessing certain components of the software and executing actions at the same time. From this, we can analyse how applications and servers deal with an increased number of requests (for example, at peak times) and whether the system works perfectly under increased traffic.

    In contrast to a load test, a stress test puts the system under maximum load to gain insight into potential optimisations. The aim is to stress the system with a very high number of users and to bring it slowly to a complete breakdown. We then observe how the system reacts after the breakdown and whether it can recover by itself. This also allows us to state how many simultaneous interactions it takes to bring the system to a standstill.

  • Security testing

    With security tests, we check applications and systems for weaknesses and security gaps which could be exploited by hackers or unauthorised persons. We highlight potential threats and risks in order to avoid the loss of confidential information. Possible security tests are, for example, vulnerability scanning, penetration testing, risk assessment, security audits or ethical hacking.

  • Usability testing

    We want to simplify the lives of our users. Usability means the user-friendliness of interactive systems. In usability tests, effectiveness, efficiency and user satisfaction are measured, and any confusing or obstructive issues are analysed. “Real” users are invited to a usability test in order to review the products and systems for their usability. At an earlier stage of development, heuristic evaluations with usability experts are also suitable. Through usability testing, the vast majority of issues can be detected at an early stage and redesigned in a (more) cost-efficient way.

  • Test automation

    Automated testing is worthwhile for large and extensive projects in particular. Automated tests reduce the effort for regression tests and other frequently repeated testing. It is important to draw up an automation strategy for the software quality to be achieved before starting with automation. The focus is often centred too strongly on the GUI, so we generally recommend taking the automation levels of the test pyramid into consideration: the majority of tests to be automated sit at the lowest level (unit tests) and the second-lowest level (API and integration tests).

    Automation at the highest level (GUI level) should be kept to a minimum where possible, since the maintenance costs of the test scripts are extremely high and running the tests is labour-intensive and costly. In addition, the right framework or tool has to be chosen and the automation environment must be in place. The quality of software can be increased dramatically through automation, but automation is complex and the effort involved is often underestimated. Don’t hesitate to ask any questions you might have. We’re happy to advise you.

  • Unit testing

    Unit testing is a software testing method in which individual units of the source code (so-called “units”) are tested to determine whether they are fully functional. Unit tests play a central role in smart contracts, since the source code cannot be changed after deployment.

  • In-the-wild testing

    The aim of in-the-wild testing is to use an application under real conditions. We test your application or website by putting ourselves in the position of the end user. Imagine, for example, that you develop a sports app and the app is only tested in the office under laboratory conditions. We go out with your app and the accompanying sports watches and trackers into the natural, sporting environment, and we test the features or put them through their paces in defined scenarios and test cases in order to scrutinise all functions under real-life conditions. In-the-wild tests allow us to test the entire digital experience.
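To make the unit testing idea above concrete, here is a minimal sketch: a hypothetical `discount` function (invented for illustration) with one small test per acceptance criterion, each exercising the unit in isolation.

```python
def discount(price: float, is_member: bool) -> float:
    """Apply a 10% member discount; negative prices are rejected."""
    if price < 0:
        raise ValueError("price must be non-negative")
    return round(price * 0.9, 2) if is_member else price

# Each unit test checks exactly one acceptance criterion.
def test_member_gets_ten_percent_off():
    assert discount(100.0, is_member=True) == 90.0

def test_non_member_pays_full_price():
    assert discount(100.0, is_member=False) == 100.0

def test_negative_price_is_rejected():
    try:
        discount(-1.0, is_member=True)
    except ValueError:
        pass  # expected: invalid input is refused
    else:
        raise AssertionError("expected ValueError for a negative price")

if __name__ == "__main__":
    for test in (test_member_gets_ten_percent_off,
                 test_non_member_pays_full_price,
                 test_negative_price_is_rejected):
        test()
    print("all unit tests passed")
```

Because each test pins down one criterion, a failing test points directly at the behaviour that regressed, which is what makes large automated suites maintainable.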

## Mobile device lab

Our mobile device lab contains well over 100 Android and iOS devices as well as quite a number of smartwatches. Among other things, we test the software products we develop on real devices. Testing under real conditions on real devices and different operating system versions is relevant, for example, in order to test the interaction with device sensors, GPS, Bluetooth or the interfaces to external devices and machines. It is also important to get a visual impression of the digital product and to be able to test functions such as scrolling, transitions, gestures and movements.

We regularly check the sales and usage figures of the most important and most-used devices and operating system versions on the market. A good mix of real devices, simulators and emulators should be defined in every quality strategy in order to achieve high device coverage.

## Quality Management Team

Martin Mattli

Head of Operations & Quality Management

In theory, he can take apart airplane turbines and put them together again – during the flight. But that got boring after a while and now he’s our man in charge of technical quality assurance. We’re saved!

Maud Cottier

Quality Management, User Research

Would have loved to have been a marine biologist, if it weren’t for that pesky underwater vertigo. Having studied psychology and biology, she now researches another type of bug and experiments with usability in the jungle at Apps with love.