I have been a tester my entire life, although I didn’t really know it. I chose it as my career only two years ago. I always loved to look at how things work and see under what circumstances they function and break.
At one point, I decided to explore this with people and got my master’s degree in psychology. After a few years of counseling, I decided to do a career shift and joined Slido, where it soon became my job to assure the quality of various things.
Two years ago, there was no designated tester at Slido. Not that it didn’t need one. A lot of developers’ energy went into finding and fixing bugs, and releases were delayed. Despite this, the company was growing fast, so building a QA platform was inevitable.
This task was assigned to me. I knew that as a team, we wanted to achieve more than creating a tester’s seat. I realized that quality assurance goes way beyond finding bugs in the product.
There are many areas where you can improve quality. It seemed that a good place to start would be to look at the biggest challenges we had at the time. To name a few, we wanted to:

- stop releases from breaking the key features our clients rely on during their events,
- speed up the feedback loop between developers and testing,
- help our support team resolve technical issues without pulling developers away from their work.
This is the story of how I tackled these challenges. And when I say “I,” I mean me and the other folks on our team, since at Slido, we always tackle challenges and find the best solution together.
The nature of Slido is quite special. If you are an event organizer, you don’t use Slido daily. Normally, you open it twice. The first time – a month, a week or a day before the event. And then you open it again around 7 minutes before the event is about to start. If something doesn’t work, your blood pressure goes sky-high.
You instantly click on the green “chat with us” button, or give us a call, and wonder how it is possible that everything was working last week and isn’t now.
We all know what happened. The new version is out. But the client’s event is jeopardized and there is not much that can be done at such short notice. Even the great people in our helpline team would sweat blood to save a client’s event from a faux pas.
This was the first thing I needed to prevent. The obvious solution at the time would have been to mindlessly test everything, even unrelated features, with each release, just to make sure nothing broke.
It didn’t take much time to realize this is slow and ineffective.
As a company, we wanted to move fast, but when you move fast, you break things.
We just couldn’t afford to do that with our clients. If something is not working for an hour, it may be the critical hour, right when their event is happening.
I decided to create a list of key features that must work 100% of the time. I would check them before, and (just to make sure) after the release. At the time, I didn’t know this was called a sanity check. But it was the sane thing to do. I found that out the hard way, but it was my way. With this, I could be sure that even if something slipped through my fingers, it would not put our clients at risk.
But going through the key features manually is still not the most effective way. It’s slow, gets boring very fast, and the bigger the product gets, the more features are considered key.
For that kind of workload, I knew I would need more colleagues. I eventually got them, but we still needed to make our development more test-driven. That’s why our developers write unit tests for every new feature as a part of their sprint. As for me, I write end-to-end tests that cover the most crucial key features.
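To give a flavor of what automating a key-feature checklist can look like, here is a minimal sketch of a check runner. This is an illustration only, not Slido’s actual tooling: the feature names are invented, the checks are stubbed, and in reality each check would drive the product through a browser or an API.

```python
# Illustrative sanity-check runner: each key feature gets a small check
# function that returns True when the feature behaves as expected.
# Feature names and checks below are hypothetical examples.

def check_audience_qa():
    # A real check would exercise the product (e.g. via an HTTP call
    # or browser automation); here it is stubbed to pass.
    return True

def check_live_polls():
    return True

KEY_FEATURE_CHECKS = {
    "audience Q&A": check_audience_qa,
    "live polls": check_live_polls,
}

def run_sanity_checks(checks):
    """Run every check and return the names of the features that failed."""
    return [name for name, check in checks.items() if not check()]

failed = run_sanity_checks(KEY_FEATURE_CHECKS)
print("all key features OK" if not failed else f"FAILED: {failed}")
```

One nice property of a runner like this is that the key-feature list lives in one place, so adding a new feature to the pre- and post-release check is a one-line change.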
Even with reducing risk to a minimum, there is no way you can be 100% sure before release. That is the reality.
You just don’t know what you don’t know.
When a problem occurs right after release, it is painful for me as a tester. Somehow, I always tend to find myself saying, “Damn, I should have thought about that.“ Luckily, we are able to revert to the previous version in a matter of minutes.
Sometimes, the change is just too big, or too important, to go back on. In these critical cases, we roll out features gradually, over a few weeks.
That way, we can reduce the risk and act very quickly while impacting a minimum number of our users.
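A common way to implement this kind of gradual rollout is deterministic percentage bucketing: each user hashes into a stable bucket from 0 to 99, and the feature is on only for buckets below the current rollout percentage. The sketch below illustrates the general technique, not Slido’s implementation; the feature name is made up.

```python
import hashlib

def rollout_bucket(user_id: str, feature: str) -> int:
    """Deterministically map a user to a bucket in [0, 100)."""
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(user_id: str, feature: str, rollout_pct: int) -> bool:
    """Enable the feature for roughly rollout_pct percent of users."""
    return rollout_bucket(user_id, feature) < rollout_pct

# Example: a hypothetical "new-polls" feature in its first week at 10%.
early_adopters = [u for u in ("alice", "bob", "carol")
                  if is_enabled(u, "new-polls", 10)]
```

Because the bucket is derived from the user ID, a user who got the feature at 10% keeps it as the rollout grows to 50% and 100%, and dialing the percentage back to 0 acts as an instant kill switch for everyone.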
I sit in the same room as our developers. It makes a lot of sense. Conversations about features that are being developed are a vital part of our work. They guide me to test better. In exchange, I help them get their tasks done faster. By doing that, we create good work habits on both sides.
One simple example is testing a new feature on Internet Explorer. You just know that anything you do in CSS will create a zombie apocalypse in IE. All testers know that. And they will remind the developer every time.
So, to save time and energy, developers learn new tricks on how to prevent these types of bugs as they are writing the code. And they are getting really awesome at that. As a result, we cut the feedback loop time in half.
But I decided to take this one step further.
I realized that the most expensive thing in the development process is a slow feedback loop. If a feature was ready to be tested on Monday, and I only started testing and giving feedback on Wednesday, I was way too late.
That’s why I aimed to start testing each feature as soon as possible. Even if the feature was not complete, I could still cover at least the parts that were complete and needed to be tested. That actually required much more learning on my side. I needed to understand what I was testing. I didn’t have a technical background, but my curiosity and eagerness to find out how the product works took care of that.
Soon enough, I learned about Angular, nodeJS and gained a pretty good knowledge that would help me understand different parts of new features.
For example, say our developers were creating a new form where the user would input a name and a description and upload a photo. In the early stage of development, the markup may be ready and the validations working, but the photo-upload functionality still needs some work. It makes a lot of sense to start testing the form at this stage, mainly because, most of the time, different developers work on each part of a feature.
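As a hedged illustration of testing only the finished parts, the half-built form might be exercised like this: the name and description validations (rules invented for this sketch) get tests right away, while the unfinished photo upload is deliberately left untested for now.

```python
def validate_form(name: str, description: str) -> list[str]:
    """Return a list of validation errors (rules invented for illustration)."""
    errors = []
    if not name.strip():
        errors.append("name is required")
    if len(description) > 500:
        errors.append("description is too long")
    return errors

# Tests for the parts of the feature that are already complete.
def test_missing_name_is_rejected():
    assert "name is required" in validate_form("", "hello")

def test_long_description_is_rejected():
    assert "description is too long" in validate_form("My event", "x" * 501)

def test_valid_input_passes():
    assert validate_form("My event", "A short blurb") == []

# A photo-upload test is deliberately absent: that part of the feature
# is still under development, so feedback on it would be premature.
```

Structuring tests this way means the developer working on validations gets feedback immediately, instead of waiting for a colleague to finish the upload functionality.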
In agile development, it makes a lot of sense for testing feedback to go not only per feature but per developer.
Slido wants its clients to have a successful event. When something goes wrong, they turn to our helpline. When they encounter a technical issue that they are not able to address, the typical thing to do would be to ask developers for help. Although it may seem like a good thing, I realized it wasn’t helping with the speed of development.
In most of the cases, the helpline didn’t really need the developers’ help to make the client’s event successful. The solution may lie in finding a workaround or helping the client in another way.
Working with the product every day, I realized I knew it better than most people in the company. I knew under what circumstances the product functioned and what broke it. That is why I decided to share my knowledge with others. That way, I helped empower our helpline team to deal with all kinds of technical issues.
The helpline team is now able to ask the right questions when troubleshooting, find workarounds when needed and even create comprehensive bug reports.
If there is still a problem, they come to me and we look at the problem together. That way, even if we find a bug, we can assign a priority to it and evaluate how urgent it is for us to fix it.
It is good to know that your work has an impact. I work with our product manager every day and discuss the product’s future as well as the issues I have found or that came from our clients. Our main goal is to make everybody as effective as possible. When we do our work well, the helpline team spends less time troubleshooting, the developers work effectively and the quality of the product is rising.
The most important discussions, however, happen at the end of the sprint. Even with a fast feedback loop, there will still be some bugs two days before the release. In the past, we made an effort to fix them all so we could ship a bug-free feature.
That sometimes meant delaying the release.
Today, we choose a different approach. I meet with the product manager and we go through the list of issues and assign a priority to each and every one of them. We evaluate the time and effort needed to fix them and make a decision about which ones we want to have fixed in those two days, and which ones we don’t. We typically leave out only minor issues that won’t impact clients’ events. Then we release it.
After that, we start a microsprint (usually 2–4 days) to fix those last bugs. Meanwhile, we get feedback on the new feature from the rest of the team and from our clients – add this, rename this button, our users were confused with this, and so on. We add these to the microsprint, make another release, and we are done. In the end, we have a high-quality, reviewed feature out with no bugs (or almost none).
For us testers, I think it is vital to know what makes “our” product work and what breaks it. But I think we should not focus only on finding bugs in the code, but on building quality everywhere we can.
That is how I have always thought about what I do at Slido.
I try my best to make sure our clients get the best experience at their events. I help our developers deliver high-quality code and provide them with good feedback. I share knowledge with my colleagues and help them deliver world-class support. I care a great deal about the future of Slido and I am very happy I am able to help shape it.
As the Slido product grows and we strive to deliver a world-class experience to our customers in everything we do, we are looking to add talent to our QA/Testing team as well. If you are up for the challenge, be sure to check out the opportunity here. We would love to get to know you!