07 - Understanding Issue Results

A discussion of all of the information contained in Tenon's issue results. [11:48]

Transcript

  1. Let's take a closer look at the results that Tenon presents to us. I'm going to go over here to history. History is, as the name implies, a history of all of the tests that you've run. I'm going to go through this a little bit more closely. We see here, of course, that I've selected history; it says test history here.
  2. On the right, it says show, and by default it says show all projects. But if I wanted to see the test history for a specific project, I could just select it here. At this point, I only have one project, so it's not terribly useful at the moment, but that's that.
  3. We see here this is part of the random report string, and it tells us that we did this test three minutes ago. This is the HTTP response code that the API provides; 200 means it ran fine. It tells us the URL that was tested, the project name, which is Default Project at the moment, how many errors, and how many warnings.
  4. On the right, I have a button that says view results. If I click on view results, we get the stored test results for this test run. Like I said before, this is our share link. All the high-level information, how many issues were found, what the URL was, and so on, is up here. The ability to download a CSV is here, and our graphs and stuff are on the right.
  5. What we want to focus on for this video is the actual results themselves. Tenon's API response actually includes a fair amount more than what is conveyed here. One of the things to keep in mind about Tenon is that it uses a headless browser. It submits its response back as JSON, and when you're using the Tenon website, you're actually using a client of the API itself.
  6. This is pulling from that JSON response and massaging it into this format, and we've chosen what we want to show and not show and stuff like that.
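
Since the website is just a client of the API, you can call the API yourself and work with the same JSON. Here is a minimal sketch in Python, assuming an endpoint of https://tenon.io/api/ that accepts a POST with an API key and a URL and returns a JSON body containing a resultSet array; the endpoint and field names are assumptions based on Tenon's public API documentation, so check the docs for the exact contract:

    # Hypothetical client sketch; endpoint and field names are assumptions.
    import requests

    response = requests.post(
        "https://tenon.io/api/",
        data={"key": "YOUR_API_KEY", "url": "https://example.com/"},
    )
    response.raise_for_status()
    report = response.json()

    # Each entry in resultSet describes one issue found on the page.
    for issue in report.get("resultSet", []):
        print(issue.get("tID"), issue.get("certainty"),
              issue.get("priority"), issue.get("errorTitle"))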
  7. I want to go through, in detail, how these issue responses are formulated. This is basically a two-column table: error and description. The error side of things shows us the code that had the error; this is the snippet of code that had the problem. On the right side is a description.
  8. We also provide this, the test ID. We disclose, transparently, which test each issue applies to, for a couple of reasons. One is that if you create your own API client, you may want to filter out a result you don't like. Here's a great example: TID-26 is actually going to be taken away. It's going to be deleted in a couple of changes that we're ready to deploy, because this isn't really a big deal.
  9. Using the center tag or font tags is gross, but there's no real massive impact on users with disabilities, so we're getting rid of it. That's important, though. Let's say there's something else you disagree with. If you disagree with it, we'd actually like to hear it, but you can still make use of this in other ways. You could, for instance, filter on this.
  10. You could actually sort them by the test ID or you could group them together, whatever you want to do. That's one of the things that makes an API really powerful: you have that control, and so we disclose that information for you.
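
For example, here is a minimal sketch of filtering and grouping issues by test ID on the client side. The sample data and the tID field name are illustrative assumptions about the shape of the resultSet entries:

    # Suppose `issues` is the resultSet list from a Tenon JSON response.
    # The sample data and field names are illustrative assumptions.
    from collections import defaultdict

    issues = [
        {"tID": 9, "errorTitle": "Image missing alt attribute"},
        {"tID": 26, "errorTitle": "Presentational markup (center/font)"},
        {"tID": 9, "errorTitle": "Image missing alt attribute"},
    ]

    IGNORED_TESTS = {26}  # e.g. drop TID-26 if you disagree with it

    filtered = [i for i in issues if i["tID"] not in IGNORED_TESTS]

    # Group the remaining issues by test ID so related issues stay together.
    by_test = defaultdict(list)
    for issue in filtered:
        by_test[issue["tID"]].append(issue)

    for tid in sorted(by_test):
        print(f"Test {tid}: {len(by_test[tid])} issue(s)")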
  11. On the right side is a description of the issue. We give you a couple things here and I'm going to skip around. We give you the line number that the issue was on. In the document source, it's line 56. That's not as useful as it could be. It's pretty useful in cases where you're testing static HTML, but if you have a lot of JavaScript or server-side rendering of some kind, it may not be as useful as we'd like it to be.
  12. But if you were to save the source and look at it, that'd be line 56.
  13. This is the title of the issue. This issue says that the image is missing an alt attribute. Of course, as we can see here, that is true. This is the entire image tag here; it does not have an alt attribute. We call this an error. We display both errors and warnings. I'll talk about that in more detail later when we talk about certainty scores, but suffice it to say this is an error.
  14. We provide a priority score of 95. Again, I'll talk about the priority scores in another video. We give the standard that was violated, which was WCAG 1.1.1 (Level A), and then here's a brief description of the issue. It says, "All images must have an alt attribute. Not supplying an alt attribute will mean that users who cannot see the image will not understand what the image conveys."
  15. Underneath this is recommended fix. This content is only available to people who are logged in. If you click on recommended fix, you get the full best practice information for this specific issue. The best practice here is "Provide alt attributes for image elements." This supplies a long-form description of what the issue is and why it's an issue.
  16. We also tell you about the test. Test ID-9 is an automated test for images without alt attributes. We disclose the description of the test. It says, "This test looks for images that do not have alt attributes." We provide remediation guidance. Sometimes it's very brief, sometimes it's longer. I'll get to the prioritization in a second.
  17. Code samples, impacted populations, reference information, standards, so on and so forth.
  18. Now back to this, prioritization. Tenon contains, for each test and best practice, a series of factors that are calculated into a priority score. We see here that this has a high user impact and that is because, obviously, if the image has no alt attribute then it will have a high user impact. If you can't see it, you won't know what it is.
  19. Repair speed: is this something that takes a long time to repair or something that doesn't take much time at all? It does not take much time at all, so it has a high repair speed. Impact on the interface: what is the impact on the interface of adding an alt attribute? It is none.
  20. This has what we call a raw priority score of 39. We saw in the previous screen that this actually has a priority of 95 percent. What happens is, during the test run, we normalize all of those priority values and rank them accordingly from top to bottom, and so we can see here this is a 95 percent priority over some of these others that have 86, 38, and so on.
  21. Now you know -- here's a great one, 100 percent -- which ones need to be fixed first. You fix the ones that have the higher priority score.
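
Tenon's exact normalization formula isn't spelled out in the video, but a simple sketch along these lines illustrates the idea of turning raw scores into a 0 to 100 ranking for one test run. The numbers and the scale-by-maximum approach are assumptions, not Tenon's actual calculation:

    # Illustrative only: scale raw priority scores so the highest-priority
    # issue in the run becomes 100 percent. Not Tenon's documented formula.
    raw_priorities = [41, 39, 35, 16]  # hypothetical raw scores for one run

    max_raw = max(raw_priorities)
    normalized = [round(100 * p / max_raw) for p in raw_priorities]

    print(normalized)  # [100, 95, 85, 39] -- fix the 100 percent issue first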
  22. Finally, the last thing I said I was going to disclose is certainty. I'll probably talk about certainty elsewhere, but I don't want to leave you hanging. As I said, there are errors and warnings. We determine what is an error and what is a warning based on the certainty score. The certainty is a number from 0 to 100 that tells you how certain Tenon is that it has found an actual issue.
  23. This one, the certainty score is 100 percent because we are 100 percent confident that this has no alt attribute. We give that a 100 percent certainty and that 100 percent certainty means that it's an error.
  24. Some things we may not have 100 percent confidence in. There may be cases where the test is unable to determine the context of the error and whether it's really a problem or not. A great example would be if there was really long alt text on here.
  25. We have a test that says you shouldn't use alt attributes that are too long, but we can't really tell you whether that's a sufficient alt attribute or not. It may be the kind of thing that needs that kind of long alt attribute, so that's going to have a lower certainty score. Basically, the things with lower certainty look fishy, but you, as the consumer of the content, have to go through and determine whether each one is really, actually an issue.
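
As a rough sketch of how a consumer of the results might split issues into errors and warnings by certainty, here is one approach. The 80 percent cut-off and the sample data are assumed examples; the video only says that a 100 percent certainty is reported as an error:

    # Certainty-based triage; the threshold and sample data are assumptions.
    CERTAINTY_THRESHOLD = 80

    issues = [
        {"errorTitle": "Image missing alt attribute", "certainty": 100},
        {"errorTitle": "Alt text may be too long", "certainty": 60},
    ]

    errors = [i for i in issues if i["certainty"] >= CERTAINTY_THRESHOLD]
    warnings = [i for i in issues if i["certainty"] < CERTAINTY_THRESHOLD]

    # Errors are near-certain problems; warnings look fishy but need a
    # human to confirm whether they are real issues.
    print(len(errors), "error(s),", len(warnings), "warning(s)")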
  26. A final word on that. Automated testing tools have a reputation for creating what are known as false positives. We at Tenon are extremely concerned with false positives. We do not want to have false positives. We are going to create tests that have lower certainty scores, but we don't want those to be bogus tests.
  27. We don't want those to be tests that exist just because we think we want to find something and list everything we possibly can. We'd rather not do that. Automated testing tools are not meant to be judges; they're meant to be diagnostic tools. If we have created a false positive in your results, we want you to tell us.
  28. We want you to email us or use the contact page on our website to tell us about a specific test. You can reference that test ID and tell us why you think that was a false positive. We try to avoid false positives as much as possible, and we need your feedback for the cases where we haven't.
  29. That's it. That's understanding the issue results. We'll talk a little bit more about some of these things later on in other videos, but that's really the nitty-gritty on issue results.