Eliot Siegel, M.D.
Editor’s Note: At events like SIIM 2012, attendees must juggle learning sessions, networking activities and exhibits. Hopefully you had time to tour the scientific posters displayed throughout the meeting space that feature the innovative research being done in the field of imaging informatics. If you missed the poster presentations, Dr. Eliot Siegel, Professor and Vice Chair at the University of Maryland School of Medicine, Department of Diagnostic Radiology, as well as Chief of Radiology and Nuclear Medicine for the Veterans Affairs Maryland Healthcare System, shares an overview of his team’s poster on testing monitor performance.
Repurposing a Traditional CAPTCHA Challenge-Response System
to Assess Monitor Performance Metrics Including Contrast and Spatial Resolution
Jigar B. Patel, MD1; Stephen J. Siegel, BS2; Joseph J. Chen, MD3; Eliot L. Siegel, MD1,3
Baltimore Veteran Affairs Medical Center, Baltimore, Maryland1
University of Maryland Baltimore County, Baltimore, Maryland2
University of Maryland School of Medicine, Baltimore, Maryland3
The most frequently asked question over the years, and SIIM 2012 was no exception, has been about the use of “off the shelf” versus “medical grade” monitors. There has been a substantial trend, both outside and within the radiology department, to cut costs by using these much less expensive “off the shelf” monitors.
The compelling argument for “off the shelf” monitors, of course, is that they can yield major cost savings, especially in a medium to large healthcare facility. The strong argument for “medical grade” monitors is their image consistency, the ability to calibrate them more easily using the DICOM Grayscale Standard Display Function, higher luminance, and easier monitor testing, which could provide documentation in the event of a medicolegal challenge.
The difference between the best “off the shelf” monitors and medical grade monitors is probably relatively small, but there are no consumer reports for diagnostic imaging displays, and the quality of “off the shelf” monitors can vary significantly as vendors change panel manufacturers or other components.
We presented a poster that describes a rapid and easy way to test any type of monitor, and it has revealed surprising variability in the monitors we use in our own department, whether medical grade or “off the shelf.” The solution is based on a challenge test that can be given to a user to determine whether he or she can use a PACS workstation or, alternatively, could be used to report to a PACS administrator that the monitor is not meeting a given display standard.
Rather than using the familiar SMPTE (Society of Motion Picture and Television Engineers) test pattern, we used a challenge similar to CAPTCHA, the challenge-response test meant to distinguish a human from a computer before granting access to data or programs on web sites.
Users are presented with a six-letter word written in almost black on a black background, another written in almost white on a white background, and a third written in a small font. For various types of users (e.g., radiologists, technologists, clinicians), locations, or monitor types, the PACS administrator can set the percentage by which the black writing deviates from the black background (the SMPTE pattern uses 5%), the percentage by which the almost-white writing deviates from the white background, and the size of the font.
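As a rough illustration of how the administrator’s configurable deviation maps onto actual pixel values, the percentage can be converted into 8-bit gray levels for the challenge text. This is a minimal sketch, not the poster’s implementation; the function name and rounding choice are assumptions.

```python
def challenge_levels(deviation_pct: float, bits: int = 8):
    """Compute gray levels for the 'almost black on black' and
    'almost white on white' challenge text, given a percent deviation
    from full black/white (the SMPTE pattern's inset squares use 5%)."""
    full = (1 << bits) - 1                  # 255 for 8-bit grayscale
    step = round(full * deviation_pct / 100.0)
    near_black = step                       # text level on a 0 (black) field
    near_white = full - step                # text level on a 255 (white) field
    return near_black, near_white

# Example: a 5% deviation, as in the SMPTE pattern
print(challenge_levels(5.0))  # (13, 242)
```

A stricter test simply uses a smaller deviation, which places the text closer to the background and demands better monitor contrast performance (and calibration) to read.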
When a user first signs on to the system, he or she is asked to read and type in the three six-letter words corresponding to the black, white, and small-font challenges. The administrator receives the results of the challenge, which could be used to block use of the workstation or, more likely, to audit and identify sub-optimal monitors.
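The sign-on flow above can be sketched as a simple challenge-and-score loop. This is a hypothetical outline, assuming a word list and field names not taken from the poster; the real tool presents rendered images, whereas this sketch only models the bookkeeping.

```python
import random

# Hypothetical word list; a real deployment would draw from many 6-letter words
WORDS = ["camera", "window", "bottle", "garden", "pillow", "carpet"]

def make_challenge():
    """Pick one six-letter word for each test: near-black text,
    near-white text, and small-font text."""
    return {test: random.choice(WORDS)
            for test in ("black", "white", "small_font")}

def score_response(challenge, response):
    """Return per-test pass/fail results for the administrator's audit
    log; a workstation could be blocked (or merely flagged) on failure."""
    return {test: response.get(test, "").strip().lower() == word
            for test, word in challenge.items()}

challenge = make_challenge()
# Suppose the user reads the black and white words but not the small font
response = {"black": challenge["black"],
            "white": challenge["white"],
            "small_font": "??????"}
print(score_response(challenge, response))
# {'black': True, 'white': True, 'small_font': False}
```

Logging these per-test results per monitor over time is what makes the audit use case possible: a monitor that repeatedly fails the near-black test, for instance, stands out immediately.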
We have found the tool to be very sensitive to small differences between monitors, and it has been surprising how much viewing angle matters: looking at a monitor from above rather than from below or from the side can determine whether these three tests are passed.
This could be a very useful test for many purposes, but especially to alert users to the performance of a monitor, which can vary considerably depending on whether it has “warmed up” and on the angle at which images are reviewed.