Ijeoma Ezeonyebuchi works on the NPR One mobile application, validating mobile applications and the backend services that power them. She works across multiple mobile platforms to improve manual and continuous testing practices.
Content has been edited and condensed for clarity.
Increment: How do you explain what you do?
Ezeonyebuchi: It’s really hard! Essentially, I figure out what’s not working well with software and how we can improve it.
What don’t people know about QA?
There’s this big misconception that people who work in QA are not as technical as developers—that we just pick out all the things that are wrong. In my experience, people who work in QA (or as test engineers) have varied backgrounds, including software development.
We’re also very detail-oriented. We’re making sure that the entire system works, rather than a specific piece of code.
How do you test software at NPR?
We do both manual and automated testing, but it’s not necessarily all done by test engineers. Developers write tests as well. But almost everything we put out will go through a round of manual testing to verify that it works.
There are a lot of things automated testing can’t capture, especially for mobile. How do you account for all the variables that aren’t necessarily code? For mobile, this includes your OS, or where you are when you’re using your device. We can’t predict all of that.
What’s different about mobile testing?
With web, you can assume that people will update their browsers. So if you’re making a decision about testing, you know most people are using Chrome and Firefox. As mobile testers, we make those decisions too, but there are a lot more things we can’t control. How many people are on Android but running an older version of the OS? A user could also be on a five-year-old device, or connecting to CarPlay. Mobile apps have to be built for those configurations.
What do you automate? What don’t you?
It varies a lot by the situation and the specific feature. We could probably write some kind of unit test for part of a Bluetooth audio feature. But it’s also probably a lot easier to test the device itself—to press the play button.
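To illustrate the split Ezeonyebuchi describes, here is a minimal sketch of what "unit testing part of a Bluetooth audio feature" might look like. The `AudioPlayer` class and its states are hypothetical, not NPR's actual code: the idea is that playback state logic can be covered by automated tests, while the physical button press and the Bluetooth connection itself are still verified manually on a device.

```python
import unittest


class AudioPlayer:
    """Hypothetical playback state machine. The real Bluetooth I/O
    would sit behind this interface and be tested manually on-device."""

    def __init__(self):
        self.state = "stopped"

    def play(self):
        self.state = "playing"

    def pause(self):
        # Pausing only makes sense while audio is playing.
        if self.state == "playing":
            self.state = "paused"


class AudioPlayerTest(unittest.TestCase):
    def test_play_then_pause(self):
        player = AudioPlayer()
        player.play()
        self.assertEqual(player.state, "playing")
        player.pause()
        self.assertEqual(player.state, "paused")

    def test_pause_while_stopped_is_ignored(self):
        player = AudioPlayer()
        player.pause()
        self.assertEqual(player.state, "stopped")
```

A suite like this runs with `python -m unittest`; it catches regressions in the state logic cheaply, but it says nothing about whether the play button actually works over a real Bluetooth connection, which is exactly why the manual check remains.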
Effective automation is all about prioritization. We cover what makes the most sense first: repetitive tasks that we imagine are not going to change very often, or features that are pretty simple. Then we’ll work toward automating more complex functionalities. Anything we can’t automate, we’ll test manually. There should be a balance.
What are the limits of automation?
You can only ever automate what you actually know. That’s a big limitation, especially for mobile. We talk about the features we’re going to develop, but we can’t predict what every single user will do on every single device.
And automation can only capture what you know with the tools you have. Sometimes cross-browser or multi-platform testing doesn’t work. Sometimes your testing framework doesn’t recognize certain attributes—then your options are to write code to enhance the testing framework or to test manually. There is no testing framework that will complete every single test you have in mind.
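The option of "writing code to enhance the testing framework" could look something like the sketch below. Everything here is hypothetical: a tiny element tree and a depth-first lookup helper standing in for the kind of custom locator you might add when a framework can't target a particular attribute (say, a `testid`) out of the box.

```python
class Element:
    """Hypothetical UI element: a dict of attributes plus children."""

    def __init__(self, attrs, children=()):
        self.attrs = attrs
        self.children = list(children)


def find_by_attribute(root, name, value):
    """Depth-first search for the first element whose `name` attribute
    equals `value` -- the sort of helper you might layer on top of a
    framework whose built-in locators don't recognize that attribute."""
    if root.attrs.get(name) == value:
        return root
    for child in root.children:
        found = find_by_attribute(child, name, value)
        if found is not None:
            return found
    return None
```

For example, `find_by_attribute(screen, "testid", "play-button")` would walk the tree and return the matching element, or `None` if the attribute never appears, in which case a manual test is the remaining option.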
What would you change about the development and testing process? Are there certain tests, or types of testing, more developers should do before software gets to QA?
Definitely test your code before it gets to QA! Different companies do different things, whether it’s a code review or running unit tests, but there should be a clear path for what developers should accomplish before they send code to QA, and a clear definition of “done.” Requirements help developers and QA engineers easily determine not only how to build tests, but also what to test.
What’s the most valuable thing about testing?
It’s always good to question things: how you write code, how you view code, how you find issues. Thinking about the unexpected is the best way to test efficiently.