Interviews with Experts

The Depth of Software Testing with Jessica Sachs

Jessica Sachs, software engineer at Ionic and expert in testing and QA, discusses her unique journey into the field of software testing.

Kent and Jessica open the conversation by comparing different testing paradigms. They cover a range of approaches, from test-driven development to component-driven and end-to-end testing. A notable point is the "testing trophy," a concept that stresses the importance of balancing different types of tests.

They both express reservations about shallow rendering and advocate for tests that closely mimic user interactions.

Both Kent and Jessica recommend TypeScript and static analysis tools, emphasizing their role in elevating code quality and developer productivity. They introduce an interesting perspective on "use case coverage," urging the testing of real-world scenarios rather than just achieving high code coverage metrics.


Transcript

00:00:00 Kent: Hey everybody, I'm super excited to be joined by my friend Jess. Say hi Jess. Hello. So Jess and I go way back. My goodness, we have known each other for a while. I always try to think of like, where did we meet and how did that relationship start? I think we probably go back to like 2015, 16

00:00:20 timeframe.

00:00:21 Jessica: Yeah, that's been a minute.

00:00:23 Kent: Yeah, it's been great. And it's just been a pleasure to know you and keep up with all the developments in your career and what you're doing to make the dev world a better place. So thank you for everything you do. I'd like for you to give a little intro to yourself and so people know who you are.

00:00:42 Jessica: Cool. My name is Jessica Sachs. I've been in the testing space and doing QA in various forms, all the way from manual iPad testing through working at Cypress and shipping Cypress component testing.

00:00:57 Jessica: I led the E2E team there for a brief moment and have been really interested in how people work in general. And testing is a big part of that.

00:01:07 Kent: Yeah, awesome, awesome. So you're pretty deep in this. Not just "I test at work," but "I built a testing framework." So.

00:01:19 Jessica: I think contributing to Vitest was like my fourth one. And I was number one on Stack Overflow for Appium, which is the mobile Selenium test driver, for like three years, back when that was cool. That was my achievement.

00:01:36 Kent: Wow, yeah. So you're kind of into this. What is it that draws you so much to testing?

00:01:44 Jessica: I accidentally got good at it.

00:01:48 Kent: Yes, I can relate.

00:01:50 Jessica: I started wicked young. I started when I was 16 doing manual QA, like full-time. I dropped out of high school, got a job when I was 16. And so thinking like a user and managing risk, that has been intuitive to me because I started very early in my career.

00:02:10 So I'm accidentally good at it, I guess.

00:02:14 Kent: Well, do you enjoy it? Or is it just like, I can't leave?

00:02:21 Jessica: I enjoy teaching people about it. I enjoy a little bit of the satisfaction. I've never really liked TDD, which is one weird thing about me. I guess like...

00:02:36 Kent: Yeah, we share that actually. That's pretty interesting.

00:02:39 Jessica: Yeah, I don't really like TDD, but I also like, I don't know. I like making testing as unapparent as possible. I don't like shifting workflows. I like building naturally for a problem. And testing is something that has to be done for many kinds

00:03:00 of applications. So I enjoy a bit about it. I don't know. I enjoy component driven development more than I enjoy testing.

00:03:09 Kent: Yeah. Actually, I'd love to talk about that for sure. I think what sometimes people miss, especially when they're just getting into testing... the thing that is a light bulb moment for people is when they realize: oh wait, there's

00:03:29 nothing special about tests. It's just like any other kind of software that we write. We're just automating a process, which is what software is for: automating processes. So whether it's writing tests so you don't have to manually check things, or writing a feature so you don't have to manually keep a record

00:03:49 of all the users who bought, you know, your stuffed koala, whatever it is, it's all just writing software, which is fun. So let's talk about component-driven development then. If we're not doing test-driven development, then what is component-driven development?

00:04:07 Jessica: So, component-driven development. I would say that we build a lot of components day-to-day; that's kind of the main thing that front-end developers tend to do at this point. They think in components.

00:04:23 Kent: Yeah.

00:04:25 Jessica: Component-driven development is isolating the contract: the styles, the markup, and the props and events, maybe, if you're using a framework that has them. I come from a Vue background, so we have events. We also have things like slots, which are like children, and that I

00:04:45 like to think of as a function, almost like a rendering function that you would pass in. So component-driven development allows me to architect the contract of the component from a style perspective, from a markup perspective, and an accessibility perspective. You generally develop in a sandbox. Yeah.

00:05:05 Kent: Cool. And so, yeah, so what is it about component-driven development that helps you be so productive?

00:05:13 Jessica: I would say, the styling. Mm-hmm. Understanding the boundaries of styling the same way that you would understand the boundaries of an API. I think people understand functions and things that do logic fairly easily. You

00:05:33 might not need to think about that contract too heavily ahead of time. But with things like styling, nobody really thinks about the contract between parent and child when it comes to visual stuff.

00:05:48 Kent: I think with components, because like you said, it's a sandbox, it makes it easier to not have to have the entire application in your head. You can just focus on the area that you're in. So what is, yeah, go ahead.

00:06:05 Jessica: You can also start to see some things creep in. So for example, if you're trying to render a very simple presentational dialog, you might notice that you're mocking a network request, and you ask yourself at that point: it's annoying to write, for example, a Mock Service Worker

00:06:25 network request stub. That's annoying. Why am I doing that when all I want to check is whether the X button fires a callback? Right? That's weird. Why am I stubbing this very domain-specific network call? You ask yourself questions when you break it down like that.

00:06:43 Kent: Yeah, yeah, I think that makes sense. So what, I guess that kind of leads into the next question I want to ask you about, and that's what does component-driven development have to do with testing?

00:06:53 Jessica: So both of them produce a sandbox. Both environments require you to fixture your application and its components so that they can be rendered in environments that are not your app. That's where people struggle the most in testing.

00:07:13 So when you do component-driven development, you're getting the workflow of building in isolation in a playground, you're getting some gains there. And then when it comes time to test, you've already done all the hard work of mocking. You just, yeah.
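That "fixture your components" idea can be sketched with a plain factory; the `User` shape and its defaults here are hypothetical:

```typescript
// A fixture factory: if a component's props can be constructed outside
// the app like this, it can render in a playground or a test alike.
type User = { id: number; name: string; admin: boolean };

function makeUser(overrides: Partial<User> = {}): User {
  return { id: 1, name: "Steve Silver", admin: false, ...overrides };
}

// The same fixture feeds a Storybook story and a component test:
const admin = makeUser({ admin: true });
console.log(admin.name, admin.admin); // Steve Silver true
```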

00:07:30 Kent: Yeah, that makes a lot of sense. So when you're working with that kind of a workflow, are you typically using a tool that facilitates the isolation, like Storybook or something?

00:07:41 Jessica: So there are many tools that facilitate the isolation of components. There are ones that do it with iframes; there are ones that do it by rendering your entire application. The first test

00:08:01 runner I ever built just loaded a browser and changed routes for every component. It was a very weird, interesting idea. It's called the feather test. I don't know if I've hidden it from my repo or my profile out of embarrassment, but basically, all you need is a window object that's sanitized.

00:08:22 It's the same thing that you would need in a test environment. From there, there are three or four vendors, I would think: Storybook being very presentational-component heavy, and Cypress and Playwright being very application

00:08:43 facilitating, right? They were built as end-to-end test runners, so they have all the primitives necessary to drive the application, as well as tell you information about it. They have retryability, which is a big thing.

00:08:57 Kent: Yeah. Cool. So with component-driven development and like component-level tests, do you find that you end up writing most of your application tests at that level of the component?

00:09:18 Jessica: I do. I think that's a side effect of the way I build applications, which is not very page-based. I mostly build single-page applications. If I was working in more of a meta-framework sense, I would probably find myself leaning towards more end-to-end

00:09:38 tests, or being very comfortable mocking out full-page navigation. So I would lean towards something like Playwright or Cypress, where it's very simple to go between the contexts of: I'm rendering something presentational, a button that doesn't navigate, you know, not a submit button within a form,

00:09:58 basically. Right? When you get into multi-route navigation, page-level testing, depending on your meta-framework, things get a little weird.

00:10:08 Kent: Yeah, things can get weird. I think that's actually really consistent with the way I typically develop. When I was at PayPal, one of the last things I started working on was a component library, and we used Storybook, and

00:10:28 everything I was doing was component tests, because of course, I'm not building an app, I'm building components. But especially with Remix... so, I made the testing trophy concept, where integration tests are the main amount of stuff that you're writing. And I would put component

00:10:48 tests in there. They kind of sit between integration and unit. They're certainly not a unit test, but they're not quite an integration test. So I'll just lump them in with integration. So anyway, we're integrating with a framework if we want to be.

00:11:06 Jessica: People get hung up on the word integration. They always like to label the kind of test, and it's like... yeah.

00:11:14 Kent: It depends. Honestly, even as the creator of the testing trophy idea, I don't actually care too much about the distinction between these different types of tests. But I would say that with the meta-frameworks, I have found my trophy getting a little top-heavy, and I do write a lot more end-to-end tests than I used

00:11:34 to. Not only is it simpler, because I don't want to worry about just one side or the other, I'm doing full-stack rendering and everything, but our tools have gotten better too. And so yeah, I'm with you. If my deliverable is components, then of course I'm going to be doing a lot more

00:11:54 component testing. If my deliverable is an application, I'll component-test the complicated, very domain-specific components, but the rest of my components are pretty much Lego blocks that were already tested elsewhere. And so I end up just covering those as part of the end-to-end tests.

00:12:13 Jessica: Something that I think you've written about before, and that was very common in the Quality Assurance Stack Exchange group: if your underlying technology changes, and your tests also have to change, that's not

00:12:34 great. Ideally your tests will give you security without needing a major rewrite, right? So if you're...

00:12:43 Kent: Yeah, you know... oh, sorry, I interrupted you.

00:12:45 Jessica: If you're thinking about things like: I'm building in Next or Nuxt, and therefore I need to mock the Next image tag to be a plain image, which is what has to happen for things like Storybook and Cypress component testing, because we're not thinking in terms of the meta-framework, we're thinking in terms

00:13:05 of bundling, which is such a front-end concept. The user doesn't even know what bundling is, at all. But the user does know what "open this URL and enter the form data and click submit" means. And at the end of the day, that's what your project manager or

00:13:25 product owner cares about. Technically, you shouldn't be coupled to those ideas.

00:13:32 Kent: Yeah, 100%. In the workshops, the very first exercise in the testing workshop is writing your first end-to-end test, and I start out by telling people: whenever you're thinking about writing a test, pretend you're a product manager telling a manual tester

00:13:52 what steps to take to verify things are working. Because the manual tester isn't going to understand, or maybe they do understand, manual testers are very smart, but they shouldn't have to understand the actual implementation. And if you follow that pattern, you keep your tests free of those details. I like to say

00:14:12 your tests should be implementation detail free and refactor friendly. And that's...

00:14:20 Jessica: I like that. Yeah. I remember, back around the time Ruby was really popular, a few years after Rails was really popular... actually, this is earlier, this is like 2004, there's this language called Gherkin. We're

00:14:40 talking about expressing ideas in tests as if they were English, to make them very understandable and to think outside of technical implementation. And so back in the day, around 2004... please fact-check me.

00:15:01 I was writing a talk called The History of Modern Front-End Testing recently, so I was googling this; maybe I'm thinking of a particular language binding. But the idea is that you have: given this scenario, when the person does this... given, when, then.

00:15:22 Then something should happen. So in the given state, you define a scenario; when actions occur from the user, ideally; then an assertion should be made. And it looks like English. You know, it reads like a sentence. And I found that very fascinating. It is brittle for other reasons.

00:15:42 Well, it's not brittle exactly, it's difficult to debug, for specific reasons. But the concept kind of stuck with me. It's like: wow, you could hypothetically write English that outputs a requirements CSV doc that you could hand to some healthcare agency. That's actually one of the benefits of using something

00:16:02 like Gherkin. But it's impossible to debug. I do not recommend it.
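The given/when/then shape can be sketched as a toy harness in plain TypeScript; this is an illustration of the idea, not any real Cucumber or Gherkin binding.

```typescript
// A toy given/when/then harness. Each step takes an English description,
// so the test reads like a sentence while still executing code.
type State = Record<string, boolean>;

function scenario(name: string) {
  let state: State = {};
  return {
    given(desc: string, setup: () => State) {
      state = setup();
      return this;
    },
    when(desc: string, action: (s: State) => void) {
      action(state);
      return this;
    },
    then(desc: string, check: (s: State) => boolean) {
      if (!check(state)) throw new Error(`${name}: "${desc}" failed`);
      return this;
    },
  };
}

// Reads like English, asserts like a test:
scenario("logging in")
  .given("a registered user", () => ({ loggedIn: false }))
  .when("the user submits valid credentials", (s) => { s.loggedIn = true; })
  .then("they should be logged in", (s) => s.loggedIn === true);
```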

00:16:06 Kent: Yeah. We actually did something similar to that at my first developer job, at a company called Domo. The QA engineers wrote up a bunch of Gherkin tests that were just the given-when-then, not the implementation of the test. And then every release,

00:16:26 they would divvy out all of the tests to people and we'd do them manually. And then over time, they would write the implementations of these, so we'd have to do fewer and fewer manually. It was awful, I hated it, but it was necessary, because otherwise

00:16:47 we probably wouldn't have had any tests. So.

00:16:50 Jessica: It's the automating of processes that you're talking about. And I think this was much more common with teams that only had budget for manual QA, before developers took ownership of the testing stack. There was this seam between

00:17:11 user requirements and validating that they worked, and the developers that wrote the code; there was this big gap between the two. As we got more mixture, I think people started to understand: instead of just checking that the list handles users with these particular fields,

00:17:31 an admin user, instead of just manually logging in as a user named Steve Silver to make sure the app doesn't crash when it handles 10,000 accounts, we could just automate that. Although at that point it often wasn't technically possible to automate, which is the other interesting thing. In the early

00:17:52 2010s, around when Appium was released, it was impossible to automate iOS app testing. That's why you had a lot of testers.

00:18:03 Kent: Yeah, that makes a lot of sense. One thing you said earlier that I thought was pretty interesting was that your tests shouldn't know about what framework you're using under the hood; you should be able to swap out the framework and the tests should still run. And that actually is a big part of

00:18:23 what made Testing Library take off as much as it did. Testing Library had been out for about a year, I think, maybe two. And it was definitely gaining popularity, but a lot of people were happy just sticking with Enzyme. And then React Hooks came out,

00:18:43 and that's just an implementation detail. So your tests with Testing Library would work either way. You could refactor from classes to hooks and your tests should still pass. In fact, the idea is that the tests verify you didn't break anything in that transition. So it's a huge benefit. But Enzyme, because it's so

00:19:04 implementation detail specific, that's just the design around it, nobody could keep their Enzyme tests and upgrade to Hooks. In fact, I don't think Enzyme even works with hooks now, and that's because nobody's really working on it... but

00:19:19 Jessica: Someone is working on it. I met him the other day. Yeah, I asked him what the feature is about Enzyme that keeps him maintaining it.

00:19:29 Kent: What is that?

00:19:30 Jessica: It is shallow mount. It is the exact thing that has the problem you're talking about. Yeah. To him it's a benefit. It's very interesting. You will also see that GitLab's codebase almost exclusively uses shallow mount. Vue Test Utils promoted it heavily, because it was written around when Enzyme was written.

00:19:50 And that's no longer the case. I think Lachlan and I, who both worked at Cypress to create Cypress component testing, very much were like: shallow mount is not great for refactorability.

00:20:03 Kent: Right. Yeah, I have a blog post titled Why I Never Use Shallow Rendering, or something like that. Very, very strongly worded. I used it once, long enough to realize it was a terrible idea, and never used it again.

00:20:19 Jessica: I came to component development fairly late in the game. I spent a lot of time in vanilla JavaScript development. So there was a lot of time, I think until 2016 or '17, where I didn't even know what a component was. And mind you, this is like six years into my

00:20:39 career, and I don't know what components are or what component testing is.

00:20:44 Kent: There are still a lot of people like that in our dev world right now. Yeah.

00:20:49 Jessica: And I'm thinking about things from this user perspective. And I'm like, well, if you change the tests at the same time as you change the source code, how can you know? Like, how can you know? So, the best test, in the hypothetical, I always think in the hypothetical, right? The best test is your product owner who

00:21:09 knows everything about the app telling you this is perfect. Well, the best best test is testing in production, which is something I want to talk about. Which is a valid strategy, depending on your company.

00:21:23 Kent: 100%.

00:21:24 Jessica: It's super weird, but I've done it to success. When it comes to the hypothetical best test, you want to change as little test code alongside your source code as possible, and shallow rendering prevents that entirely, because by design you're passing mocked functions

00:21:44 into each prop, basically, if you think of it in that context.

00:21:49 Kent: Yeah. Yeah. So don't do it, folks.

00:21:52 Jessica: If people tell you to unit test your components, don't. Because it doesn't matter if the unit of the component works; it matters if the user sees it on the page and can click on it. That's the thing.

00:22:06 Kent: Yeah, now wait. Does that mean unit tests are worthless, or where do they make sense?

00:22:13 Jessica: Unit tests make sense when your user is a developer. That's what I would say. In which case you can write component or integration-y tests that test the side effect of rendering the component, which is the DOM updating,

00:22:33 and, depending, you can also test a shallow-rendered component for unit coverage. But remember that unit coverage doesn't actually give extra value to the user. The only thing that matters to the bottom line is: can the user hit submit when they're trying to give you

00:22:53 payment information at PayPal.

00:22:56 Kent: Yeah. You know, I would say that there's not really a great use case for what I would call a unit test for a component. Because if we're going to be dogmatic or pedantic about this, then to unit test a component, you're mocking the framework.

00:23:17 That's the only way to do it. So: terrible idea, don't do that.

00:23:21 Jessica: Don't do that. People from a Java background will be tempted to do that through heavy usage of Mockito and stuff like that. So depending on your background, you might have a few influences where it's fashionable to mock everything, but don't do it.

00:23:37 Kent: Yeah. But where I find unit testing to be very valuable is lower-level utility functions that are pretty complex, and stuff like that. Especially pure functions; I really like unit testing that stuff. But even impure things, things that depend on the date,

00:23:57 for example. Like, if you're generating a one-time password; I have unit tests on my library for that. Unit tests can be really very useful for that kind of thing.

00:24:07 Jessica: Right, when you have to mock something that's not guaranteed to be consistent over multiple runs of the component or function... sorry, of the subject under test, which is the jargon you might hear: SUT, for people who don't spend time on the QA Stack Exchange. Yeah,

00:24:27 so, things that you have no control over, because you don't own the whole system, and you shouldn't. Your code doesn't care if it's on a Linux box that gives you a UTC time zone.
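One way to unit test the date-dependent case Kent mentions is to inject the clock instead of mocking the global `Date`. The windowing function below is illustrative only, not a real OTP algorithm.

```typescript
// Sketch: a time-windowed code generator that takes "now" as a
// parameter, so tests control the one thing that isn't deterministic.
function codeForWindow(secret: number, now: Date, windowMs = 30_000): number {
  const window = Math.floor(now.getTime() / windowMs); // 30-second window
  return (secret * 31 + window) % 1_000_000; // toy derivation, not TOTP
}

// Deterministic unit test: pin the clock.
const fixed = new Date("2024-01-01T00:00:00Z");
const a = codeForWindow(1234, fixed);
const b = codeForWindow(1234, fixed);
console.log(a === b); // true -- same pinned time, same code
```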

00:24:36 Kent: Yeah, yeah. So a lot of people, when they start adopting static analysis tools like TypeScript, start to feel like testing is much less valuable. What is your take on that?

00:24:57 Jessica: Okay, it depends. It depends on who your user is and what kind of code you're writing. There are many benefits to TypeScript, but two major ones. One is selfish: your own productivity

00:25:17 and your teammates'. The other is the user benefit of TypeScript. The user benefit is what you're going to tell your boss; the selfish benefit is what you're going to say will improve velocity. So from a quality perspective, static analysis tools, such as linters and TypeScript,

00:25:38 live on the base of your testing trophy. Mm-hmm. Yeah. I could go on about that. And they prevent your users from accidentally putting in bad data, depending on how you're using them. So

00:25:58 application developers can rely on TypeScript for a large portion of quality by using different kinds of schema validators. It depends on the tooling you use to leverage the full power of TypeScript for type safety, like true safety.

00:26:19 The other side of the coin is JavaScript library authors, whose users may not be using TypeScript. They really should do a lot of runtime checks to make sure the data types you're passing in aren't wrong and the schema you're passing in isn't wrong. But for application developers, I think you gain a lot of safety

00:26:41 by using TypeScript.
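The library-author side can be sketched like this: the static type helps TypeScript consumers, while the runtime check guards plain-JavaScript consumers. `formatPrice` is a made-up example function.

```typescript
// A library function that can't trust its (possibly non-TypeScript)
// callers, so it validates at runtime on top of the static type.
function formatPrice(value: number): string {
  if (typeof value !== "number" || Number.isNaN(value)) {
    throw new TypeError("formatPrice expects a number");
  }
  return `$${value.toFixed(2)}`;
}

console.log(formatPrice(19.5)); // $19.50
// A plain-JavaScript caller passing "19.5" or undefined gets a clear
// TypeError here instead of silently garbled output downstream.
```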

00:26:43 Kent: Yeah, I do think what TypeScript did for me was eliminate a bunch of tests that I wasn't writing anyway. So if I was making a theoretical sum function that takes two numbers and adds them together, I would write a test that ensures the value returned is correct,

00:27:04 but I probably wouldn't write a test to say, well, what happens if they pass a symbol, or what happens if they pass an object? I wouldn't have written that test. Yeah. What is it?

00:27:14 Jessica: A QA engineer walks into a bar and orders a beer, orders 999 beers, orders a goat... something like that. There are so many flavors of that joke, and I picked the one that was off the top of my head.

00:27:28 Kent: Yeah, that's funny.

00:27:30 Jessica: But yeah, just using your code, which you know how it's intended to be used, in completely, wildly incorrect ways, like you would never do. So by setting strict types, you eliminate that entire class of problems.
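Kent's theoretical `sum` makes the point concrete: with type annotations, the wrong-type tests never need to exist.

```typescript
// The value test is still worth writing; the "what if they pass a
// symbol?" test is handled by the compiler instead.
function sum(a: number, b: number): number {
  return a + b;
}

console.log(sum(2, 3)); // 5
// sum(Symbol("x"), {}); // <- rejected at compile time, no test needed
```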

00:27:43 Kent: Yeah, 100%.

00:27:44 Jessica: Something I was going to say: one of my favorite types of bugs to catch using TypeScript, and this is universal for all web developers, is icon names. So one of the tools I use is called Iconify. It uses class-based

00:28:06 names to say, i-mdi-arrow-up. Cool. They also have component versions of those; the implementation doesn't matter. The point is that I can write a TypeScript type literal that says: I'm looking for any of the valid keys in this JSON

00:28:27 enum of valid icon names. You can generate that really easily, and it eliminates icon typos, which are actually extremely difficult to catch and very brittle. The only way you can catch those, most of the time, is by taking a picture of the entire UI. And you're dealing with things like:

00:28:47 is the chevron up or down? Did I type chevron-up when I meant chevron-down? Did I get the order right? Et cetera, et cetera. And so I have to interact with the component twice and take photos twice, versus using a type.
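A minimal sketch of that icon-name idea; in practice the union would be generated from the icon set's JSON, and the two names here are just illustrative.

```typescript
// A literal union derived from the icon table: any typo in an icon name
// becomes a compile error instead of a silently missing glyph.
const icons = {
  "i-mdi-chevron-up": "▲",
  "i-mdi-chevron-down": "▼",
} as const;

type IconName = keyof typeof icons;

function icon(name: IconName): string {
  return icons[name];
}

console.log(icon("i-mdi-chevron-up")); // ▲
// icon("i-mdi-chevrn-up"); // <- typo caught at compile time,
//                          //    no screenshot comparison required
```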

00:29:05 Kent: Yeah, yeah. And actually the Epic Stack does exactly that, with a list of valid icons. So yeah, this has been a really fun conversation with you, Jess. We're coming close to the end of our time. Is there anything that you really wanted to talk about that we haven't gotten to yet?

00:29:24 Jessica: I'm curious what your take on code coverage is. I get that question all the time.

00:29:29 Kent: Yeah, that is a good question. So code coverage, for the uninitiated, is where you do something special to the code that is evaluated during tests, so that you can track which code was run during your tests. And the objective is to

00:29:49 know which parts of your code you're actually evaluating during your tests, and make sure you didn't miss any use cases and things. And that's precisely where I think it's valuable: thinking about the use cases. I find if you're very focused on code coverage, then you're looking at lines of code and thinking, okay, now I need to pass

00:30:09 a 9 as an argument here to make sure I end up in that place there, when you should really be thinking: okay, in what situation would a 9 end up going through that? The test title that you write to cover that should resemble a use case rather than some part of the implementation. Because what

00:30:29 can sometimes happen, and I actually have a blog post about this, called How to Know What to Test, that goes into detail about use case coverage, is that you can write your tests, even if you're trying to stay free of implementation details, and title them

00:30:49 in a way that kind of references the implementation. And then when the implementation changes, because you weren't focused on the use case, you end up preserving that test, continuing to run it, even though no part of your application cares about that particular thing anymore.

00:31:10 And so, yeah, anyway, I...

00:31:15 Jessica: You allow people to keep code that's not valuable, because you're just thinking, oh, these lines of code have to execute because of the numbers, like code coverage needs to be 100%, but you haven't stopped to think: wait, does 9 even happen anymore? That's what I'm taking away from what you just said.

00:31:34 Kent: Yeah, that's exactly what I mean. Yep. And so, there's not really a metric for use case coverage; there's no way to automate that. You'd have to ask your product manager what all the use cases are, and then every time you run the tests, make sure those are covered. So it's hard to measure. But that's what you should be targeting.

00:31:57 And code coverage, I find, in a library: yeah, 100%, cover all those lines, I think that makes a lot of sense. In an application, I don't really ever give a percent that I think is reasonable, except I will say that 100% is probably not, unless you're writing

00:32:17 code for autopilot on a car or an airplane. Otherwise, I think chasing 100% leads to really poorly written tests.
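The "9 as an argument" example can be sketched as: name the test after the business scenario that makes the branch exist, not after the line it covers. The pricing rule below is hypothetical.

```typescript
// A branch that exists because of a business rule: 9+ units get the
// bulk rate. The use case, not the line of code, is what the test names.
function orderTotal(quantity: number): number {
  const unitPrice = 10;
  if (quantity >= 9) return quantity * (unitPrice - 1); // bulk rate: $1 off per unit
  return quantity * unitPrice;
}

// Test titled by use case, not implementation:
// "charges the bulk rate when a customer orders a full carton (9+)"
console.log(orderTotal(9)); // 81
// "charges the regular rate for small orders"
console.log(orderTotal(2)); // 20
```

If the bulk-rate rule is ever removed from the product, the use-case-titled test obviously goes with it; a test titled "passes 9 to reach line 4" would linger.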

00:32:28 Jessica: I definitely agree with that. Do you want to hear a fun fact I learned this last year?

00:32:34 Kent: Yeah, I'd love to.

00:32:35 Jessica: So I took a break from Cypress before going to Ionic. I took a nine-month break to work at a healthcare startup. What I learned at the healthcare startup is that quality is, obviously, very important. Not surprising. I introduced the tool I had just written at Cypress, Cypress component testing. And I found out

00:32:56 that their quality process is to download the entire repository onto a USB stick, everything that they consider source code, which gets a little fuzzy when it comes to open source code, and mail the USB stick to the FDA.

00:33:16 And the FDA will review your tests. Like, go line by line, probably with Kent C. Dodds' best testing practices on the left side and their editor on the right with your code, and say: I don't think you should be looking for this specific ID on a div

00:33:36 to test something. They will give you comments; the FDA will give you code feedback on how good your tests are.

00:33:43 Kent: That is crazy.

00:33:44 Jessica: Yeah. It's wicked interesting. And I thought to myself, I don't know how I'm going to feel if someone tells me I'm using Cypress component testing incorrectly by reading my own content. I would have done a bad job by not adhering to my own advice.

00:34:03 Kent: That is so funny. Oh my goodness. That is so interesting. But yes, there are some industries where code quality matters a lot more than others. And I have a blog post called Should I Fix the Bug or Write the Test?, or something like that, where the idea is: how do you prioritize writing tests versus writing new features?

00:34:23 And the outcome is just: it's going to be different depending on the world you're living in. Because shipping new features and writing tests require the same currency, and that currency is our time. And so if you're spending

00:34:43 time working on tests, you can't be spending that time working on features. And depending on your environment, like the stage of your startup, or the industry your startup or enterprise is in, whatever, the return on that investment is going to be different. So it's very contextual. Sorry, it's not a science, it's an art.

00:35:05 Yeah. Yeah. Well, cool. Jess, this has been so fun to chat with you. Thank you for giving us some of your time. What's the best way for people to keep up with the stuff you're working on?

00:35:14 Jessica: Sure, I'm very active on Twitter, or X, under my handle, which you will probably link: _JessicaSachs. So, my name as a private variable. And then I have a domain if you want to join my Discord. It's my name as a shell script: jess.sh.

00:35:36 Kent: Nice.

00:35:36 Jessica: And you can find the Discord link there, for faster feedback.

00:35:41 Kent: Awesome, well thank you Jess and it's always a pleasure to visit with you. Thanks everybody for watching and we'll see you all later.