16 - Dashboard

An explanation of Tenon's dashboard. [09:12]

Transcript

  1. Let's take a closer look at the dashboard. I've done a little bit more testing on this account, so now we have a little bit more data to take a look at.
  2. This is the dashboard. The dashboard provides you with a quick overview of the performance of the pages you've tested. At the top here, we see three fields meant to filter our results.
  3. The first one is the project selector. This allows us to select between all of our projects, which is what's currently chosen, or any of our other projects. Naturally, because I only have one project, all and default project are going to be the same, but if I selected default project, then the results would be filtered accordingly.
  4. Next up are start date and end date. As their names imply, these are the start and end of the data that we're looking at. We could filter this literally by day, by week, or whatever the case may be, just by selecting these values. By default, they go back a month; more specifically, they go back 30 days.
  5. We see here that this is set to February 14th through March 15th, which is today. Because this is a brand-new account, there's only one day's worth of data, but if you had long-term data in there and wanted to track the performance of a specific project across a specific period of time, you could do that here.
  6. Below that is the dashboard summary table. It contains eight pieces of information. I'll go through these on the left column and then the right column.
  7. First up is total distinct pages. That's the total number of unique pages that have been tested. That says 29. Below that, it says total number of successful test runs. This counts successful test runs of any kind, regardless of whether they're duplicates. We see here that this is 30. That's because we tested that Google URL twice.
  8. Unsuccessful is how many test runs were not successful. Right now, that's zero. Average errors per page is currently 46. Average warnings per page is currently zero. Average issues per page is 46.
  9. You should definitely expect the warnings to be much lower than the errors because, again, we try to make sure we have a pretty high certainty score on our tests. Average issue certainty and average issue priority are listed here as well, at 100 apiece.
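
For readers who want to see how those eight figures relate to one another, here is a minimal sketch of computing them from a set of test-run records. The field names and sample values are illustrative assumptions, not Tenon's actual data model.

```python
from statistics import mean

# Hypothetical test-run records; the fields are illustrative, not Tenon's schema.
test_runs = [
    {"url": "https://www.google.com", "status": "success",
     "errors": 46, "warnings": 0, "certainty": 100, "priority": 100},
    {"url": "https://www.google.com", "status": "success",
     "errors": 46, "warnings": 0, "certainty": 100, "priority": 100},
    # ... one record per test run ...
]

successful = [r for r in test_runs if r["status"] == "success"]
unsuccessful = [r for r in test_runs if r["status"] != "success"]

summary = {
    "total_distinct_pages": len({r["url"] for r in successful}),  # unique URLs only
    "total_successful_runs": len(successful),                     # duplicates count here
    "unsuccessful_runs": len(unsuccessful),
    "avg_errors_per_page": mean(r["errors"] for r in successful),
    "avg_warnings_per_page": mean(r["warnings"] for r in successful),
    "avg_issues_per_page": mean(r["errors"] + r["warnings"] for r in successful),
    "avg_issue_certainty": mean(r["certainty"] for r in successful),
    "avg_issue_priority": mean(r["priority"] for r in successful),
}
```
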
  10. Below that are two charts. One is a line chart. One is a bar chart. The one on the left is the number of issues per kilobyte of source code, by day. The next one is distribution of density.
  11. Let's talk really quick about what density is. Both of these charts actually measure density. Density is, as the chart's name implies, the number of issues per kilobyte of source code. The reason we use density as a performance indicator is that not all pages are the same.
  12. You may test a page that has 10 issues in 100 kilobytes of code. That would not be as bad as, say, a page that has 10 issues in 10 kilobytes of code. In the first instance, you'd have a 10 percent density; in the other, you'd have 100 percent density.
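
As a quick sketch of that arithmetic (illustrative only, not Tenon's internal code):

```python
def density(issue_count: int, source_kb: float) -> float:
    """Issues per kilobyte of source code, expressed as a percentage."""
    return issue_count / source_kb * 100

density(10, 100)  # 10.0  -- 10 issues in 100 KB is 10 percent density
density(10, 10)   # 100.0 -- 10 issues in 10 KB is 100 percent density
```
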
  13. This would be a line chart of performance by day. It's mostly useful in cases where you use Tenon a lot because you'd be able to track that performance by day. If you don't use Tenon a lot, you'll see lots of really high peaks and lots of really low valleys. That's one thing to keep in mind.
  14. On the right side is the distribution of density. We see here that we have density being tracked in a number of buckets, 0 percent, 1-10 percent, 11-20, 21-30, so on and so forth, all the way up to 100-plus.
  15. Now this is really useful, especially when you've done a lot of testing, because basically you want the bars on the left, especially these first three buckets, to be higher than the bars on the right.
  16. You want that because those pages are less dense. They have fewer problems per kilobyte than the pages on the right do. If you see a lot in that last bucket especially, that is a project that has a lot of problems.
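
Here is a minimal sketch of that bucketing, assuming per-page density percentages are already available. The bucket boundaries mirror the chart's labels; everything else is illustrative.

```python
from collections import Counter

def bucket(density_pct: float) -> str:
    """Assign a page's density to one of the dashboard-style buckets."""
    if density_pct < 1:
        return "0%"      # treating anything under 1% as the 0% bucket for simplicity
    if density_pct > 100:
        return "100%+"
    low = int((density_pct - 1) // 10) * 10 + 1  # 1, 11, 21, ...
    return f"{low}-{low + 9}%"

page_densities = [0, 4.2, 8.0, 37.5, 120.0]  # hypothetical per-page densities
distribution = Counter(bucket(d) for d in page_densities)
# dict(distribution) -> {'0%': 1, '1-10%': 2, '31-40%': 1, '100%+': 1}
```
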
  17. Next up is worst-performing pages. This is a list of the pages that have the most issues or the highest error density out of all your pages. I did some random testing of the dc.gov website and found that they have 50 issues on all of these pages.
  18. Chances are these are actually duplicate issues. These are probably issues that happen on multiple pages. I'm guessing that, when you have that much consistency, these are probably template-related issues. In other words, they're not related to the content itself but rather the template, like the header and the footer.
  19. We can sort this. Obviously, sorting this table at this time would not be terribly useful because all the rows have the same information.
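
For reference, a short sketch of how such a worst-performing ranking could be produced, assuming per-page issue counts and densities are available. The URLs and numbers are made up for illustration.

```python
# Hypothetical per-page results; not Tenon's actual schema.
pages = [
    {"url": "https://dc.gov/", "issues": 50, "density": 42.0},
    {"url": "https://dc.gov/services", "issues": 50, "density": 38.5},
    {"url": "https://dc.gov/residents", "issues": 50, "density": 40.1},
]

# Rank by issue count, then by density, worst first.
worst = sorted(pages, key=lambda p: (p["issues"], p["density"]), reverse=True)
for p in worst:
    print(f'{p["url"]}: {p["issues"]} issues, {p["density"]}% density')
```
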
  20. As we talked about just a second ago, duplicate issues are probably a factor on these pages. We can see that here, as a matter of fact. Duplicate issues are issues where the identical code appears on multiple pages. In other words, it's the same exact code causing the same exact problem. These are linked texts with identical titles.
  21. As a matter of fact, now that I recall, the dc.gov site uses Drupal. Drupal has this bad habit that WordPress used to have, which is adding title attributes that say the same thing as the link text. Now this whole thing makes sense to me.
  22. Going back down to the duplicate issues, this is a count of the issues that are most frequently found to be duplicates. This is important because you can use this information to identify practices that are pervasive across your site and that need to be fixed with higher urgency. We provide the title, the count, and the percentage here.
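
To make "the same exact code causing the same exact problem" concrete, here is a sketch that groups issues by an identical snippet-plus-issue signature and reports how often each duplicate occurs. The records, including the title-equals-link-text markup, are made-up illustrations rather than data from this account.

```python
from collections import Counter

# Hypothetical issue records; the fields are assumptions, not Tenon's issue schema.
issues = [
    {"page": "https://dc.gov/", "title": "Linked text with identical title",
     "snippet": '<a href="/news" title="News">News</a>'},
    {"page": "https://dc.gov/services", "title": "Linked text with identical title",
     "snippet": '<a href="/news" title="News">News</a>'},
    {"page": "https://dc.gov/services", "title": "Image missing alt attribute",
     "snippet": '<img src="/logo.png">'},
]

# A duplicate is the same snippet raising the same issue on more than one page.
signatures = Counter((i["title"], i["snippet"]) for i in issues)
duplicates = {sig: n for sig, n in signatures.items() if n > 1}

total = len(issues)
for (title, snippet), count in duplicates.items():
    print(f"{title}: {count} occurrences ({count / total:.0%} of all issues)")
```
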
  23. Issues by test ID is, as the name implies, the count of issues per test. This could probably be more clearly worded. This is actually issues by test. We have a list of those here.
  24. This can be sorted as well. If I hit this, I can see here that my most frequent issue is this linked text with identical title. That happens to be 87 percent of my issues, and then on down. Again, these are going to be indicative of production practices that we should address more aggressively than others.
  25. As we see here, we've only found one image with a missing alt attribute. We definitely want to fix that, but we'll probably try to figure out what's going on with the more pervasive issue first so we can have a faster impact on the product.
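
And a small sketch of the issues-by-test rollup itself: count issues per test, express each count as a percentage of the total, and sort with the most frequent first. The counts below are invented to roughly match the 87 percent figure mentioned above.

```python
from collections import Counter

# Hypothetical per-issue test names; purely illustrative.
issue_tests = (
    ["Linked text with identical title"] * 1305
    + ["Image missing alt attribute"] * 1
    + ["Other tests"] * 194
)

counts = Counter(issue_tests)
total = sum(counts.values())
for test, n in counts.most_common():  # sorted, most frequent first
    print(f"{test}: {n} ({n / total:.0%})")
# Linked text with identical title: 1305 (87%)
```
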
  26. Finally, issues by WCAG success criteria. This lists all 61 of WCAG's success criteria. It tells us, for each one of those success criteria, how many issues were found and what percentage of the total those issues represent.
  27. This is important for those who may be working in an environment where they are required to comply with WCAG; you'll need this report for that. You can also download a CSV file. The report goes all the way through every one of these success criteria and discloses the results for each.
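
Here is a sketch of what producing that per-success-criterion CSV could look like, assuming each issue has already been mapped to a WCAG success criterion. The mapping and the counts are hypothetical.

```python
import csv
from collections import Counter

# Hypothetical issue-to-success-criterion mapping; illustrative only.
issue_criteria = (
    ["2.4.4 Link Purpose (In Context)"] * 1305
    + ["1.1.1 Non-text Content"] * 1
)

counts = Counter(issue_criteria)
total = sum(counts.values())

with open("issues_by_wcag_sc.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Success criterion", "Issues", "Percent of total"])
    for criterion, n in sorted(counts.items()):
        writer.writerow([criterion, n, f"{n / total:.1%}"])
```
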
  28. That's that. That is an overview of...