
Examination Development

Cognitive Examinations

The National Registry of EMTs (NREMT) cognitive exam item development process is extensive, taking approximately one year to complete. The process is the same for all four levels of National EMS Certification: Emergency Medical Responder (EMR), Emergency Medical Technician (EMT), Advanced Emergency Medical Technician (AEMT), and Paramedic.

Computer-based cognitive examinations consist of items drawn from the National Registry's item banks. NREMT computer-based exams are constructed to ensure that each candidate receives a distribution of items from five categories: Airway/Oxygenation/Ventilation, Cardiology, Medical, Trauma, and Operations. In every category except Operations, fifteen percent (15%) of the items cover pediatric emergency care. The number of items from each category is determined by an examination test plan (also known as a blueprint) approved by the NREMT Board of Directors.
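
As a rough illustration of how a blueprint constrains an exam, the Python sketch below pairs each category with an invented item count; only the five category names and the 15% pediatric rule come from the description above.

# Hypothetical test plan ("blueprint"). The per-category counts are
# invented placeholders; only the category names and the 15% pediatric
# rule (every category except Operations) come from the text above.
blueprint = {
    "Airway/Oxygenation/Ventilation": 18,
    "Cardiology": 20,
    "Medical": 27,
    "Trauma": 18,
    "Operations": 17,
}

def pediatric_items(category: str, n_items: int) -> int:
    """15% of each category's items cover pediatric care, except Operations."""
    return 0 if category == "Operations" else round(0.15 * n_items)

for category, count in blueprint.items():
    print(category, count, "items,", pediatric_items(category, count), "pediatric")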

The NREMT examination test plan is developed from the results of the most recent EMS Practice Analysis. The EMS Practice Analysis is conducted at roughly five-year intervals (1995, 1999, 2004, and 2009). To complete the Practice Analysis, the NREMT randomly surveys hundreds of practicing EMS providers, who are asked to rate the importance of the tasks that make up the job of an EMS provider. Importance is defined as a balance between frequency and potential for harm. A committee of national experts reviews the resulting data and develops a test plan, which is then approved by the Board of Directors.

NREMT examinations are developed to measure the important aspects of prehospital care practice. Examination items are written to correspond to tasks identified in the practice analysis. The scope of therapy addressed in an item is limited by the National EMS Scope of Practice Model developed by the National Highway Traffic Safety Administration. EMS education programs are encouraged to review the current NREMT Practice Analysis when teaching courses and when conducting the final review of students' ability to properly perform the tasks required for competent patient care.

Item Development

Individual examination items are developed by members of the EMS community serving on Item Writing Committees convened by the NREMT. Item Writing Committees typically consist of 9 to 10 national EMS experts (physicians, state regulators, educators, and providers). They meet over a three- to five-day period to review, rewrite, and reconstruct drafted items. The committee must reach consensus that each question directly references a task in the practice analysis, that the keyed answer is the one and only correct answer, that each distractor has some plausibility, and that the answer can be found in commonly available EMS textbooks. Controversial questions are discarded and not placed in the pilot item pools. Items are also reviewed for appropriate reading level and to ensure no bias exists related to race, gender, or ethnicity.

Following completion of the item-writing phase, all items are pilot tested. Pilot items are administered to candidates during computer adaptive exams. To the candidates, pilot items are indistinguishable from scored items; however, they do not count for or against the candidate. Because pilot items are administered under high-stakes conditions, their performance can be analyzed extensively. When the item analysis is complete, items found to be functioning properly and psychometrically sound are placed in “live” item pools.

The NREMT conducts differential statistical analysis of items in pilot and live item pools on an annual basis. Panels are then convened to review items that show differential statistics, and decisions are made about whether to retain those items in the pools.

About the NREMT Cognitive Exam

Candidates seeking National EMS Certification at the Emergency Medical Responder, Emergency Medical Technician, and Paramedic levels take Computer Adaptive Tests (often referred to as CAT). An adaptive test is an algorithm-delivered exam: the computer is programmed to select items in a specific, logical manner. The decision regarding passing or failing an adaptive exam is the same as with pencil-and-paper examinations: has the candidate reached entry-level competency (passed), or has the candidate not yet reached entry-level competency (failed)?

This same method is used to develop all NREMT test items used in CAT exams. First, an item (or question) is drafted. Then it is pilot tested in a high-stakes atmosphere by being placed in CAT exam test pools. A test pool is a “bank” of test questions from which the computer can draw when delivering an exam. Pilot items are placed in test pools to be calibrated, that is, to determine where they fall on the scale of difficulty. While an item is being pilot tested, it does not count toward the pass/fail score of the candidate being examined. For an item to be placed in a “live” test pool (where items do count toward pass/fail), it must meet strict calibration requirements. The difficulty statistic of an item identifies the ability necessary to answer the item correctly. Some items require a low ability to answer correctly, while others require a moderate or high level of ability.
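
Calibration places every item on the same scale as candidate ability. The NREMT does not specify its psychometric model here, so the Python sketch below uses the one-parameter (Rasch) IRT model, a common choice for computer adaptive tests, purely as an assumed illustration:

import math

def p_correct(theta: float, b: float) -> float:
    """Rasch (one-parameter IRT) model: probability that a candidate of
    ability `theta` answers an item of difficulty `b` correctly. Ability
    and difficulty share one logit scale, so they are directly comparable.
    (Assumption for illustration; the NREMT's actual model is not stated.)"""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# A candidate exactly at an item's difficulty has a 50% chance of success;
# an easier item (lower b) gives the same candidate better odds.
print(p_correct(0.0, 0.0))   # 0.5
print(p_correct(0.0, -1.0))  # ~0.73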

The CAT Exam Is Structured Differently from a Pencil-and-Paper Exam

Since CAT exams are delivered in a completely different manner than pencil-and-paper exams, they may “feel” more difficult. Candidates should not be concerned about the difficulty level of any single item, because their ability is being measured differently: every item is placed on a standard scale, and the exam identifies where the candidate falls on that scale. As a result, candidates should answer all items to the best of their ability. An example helps explain this:

Suppose that a middle-school athlete is trying out for the high jump on the track team. The coach, after many years of experience as a middle-school coach, knows that in order to score any points at a middle-school track meet, his jumpers need to clear a bar placed four feet above the ground. This is the “competency” standard. If he enters jumpers who can only jump three feet, he knows those jumpers will rarely, if ever, score points for his team during a track meet. Those who clear four feet on the first day of try-outs can not only jump four feet (the minimum) but may later, through additional coaching, learn to jump five or more feet. The coach knows it will be worth his time and effort to coach these try-out jumpers to greater heights. Therefore, he tells those who clear four feet at try-outs that they are members of the high jump team (because they have met the entry-level, or competency, standard).

Since the coach knows the competency standard, he can hold a try-out to see who meets entry-level competency. The coach will likely set the bar at or near 3 feet 6 inches for the first jump attempt. Those who make it over this bar will then progress to perhaps 3 feet 9 inches to test their ability at that height. After a group has cleared 3 feet 9 inches, the coach will raise the bar to 4 feet and have the successful jumpers attempt to clear it. A smart coach will likely not tell the team the required height, so that he can learn the maximum ability of each try-out jumper. At the 4-foot level, the coach may find that seven of ten athletes clear the bar. He will then raise the bar to 4 feet 3 inches, and later to 4 feet 6 inches, continuing until he has determined the maximum ability of each try-out jumper. If he has four slots on his team, he will select the top four or five jumpers and begin coaching them to even greater heights. In this manner, the coach has measured the ability of the try-out jumpers on a standard scale (feet and inches) and has set a standard (4 feet) for membership on the team, based on his knowledge of what is necessary to score points at track meets (the competency standard).
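
The coach's try-out is itself a small measurement algorithm: keep raising the bar until a jumper misses, and record the last height cleared. A toy Python sketch of that search (all heights and increments invented for illustration):

def measure_max_jump(clears_bar, start=3.5, step=0.25, max_height=6.0):
    """Raise the bar in fixed increments until the jumper misses (or the
    top of the range is reached) and return the last height cleared.
    `clears_bar(height)` stands in for one try-out attempt."""
    best = 0.0
    height = start
    while height <= max_height and clears_bar(height):
        best = height
        height += step
    return best

# A jumper whose true ceiling is 4.3 feet measures at 4.25 feet, which is
# above the 4-foot competency standard, so this jumper makes the team.
print(measure_max_jump(lambda h: h <= 4.3))  # 4.25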

CAT Exams Are Different for Every Candidate

The high-jump illustration above describes the way a CAT exam works. Every item in a live item pool has been calibrated to determine its level of difficulty. As each candidate takes an exam, the computer adaptive test must learn the candidate's ability level. Here is how it works: the test typically starts by administering an item slightly below the passing standard. The item may come from any subject area in the test plan (Airway/Oxygenation/Ventilation, Cardiology, Medical, Trauma, and Operations). After the candidate answers a short series of items correctly, the computer chooses items of higher difficulty, perhaps near entry-level competency, again drawn from a variety of content areas of the test plan. If the candidate answers most of this series correctly, the computer chooses new items at a higher difficulty level. Again, if the candidate answers many of these correctly, the computer presents items of an even higher difficulty level. Eventually, every candidate reaches his or her maximum ability level. The computer then determines whether or not the individual is above the standard (entry-level competency) in these content areas, and the examination ends.
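
A rough sketch of that adaptive loop follows. The starting offset, step size, and simple category rotation are illustrative assumptions; the NREMT's actual item-selection algorithm is not published here.

CATEGORIES = ["Airway/Oxygenation/Ventilation", "Cardiology",
              "Medical", "Trauma", "Operations"]

def run_cat(answers_correctly, passing_standard=0.0, step=0.4, n_items=20):
    # `answers_correctly(difficulty)` stands in for the candidate's response.
    target = passing_standard - 0.5       # start slightly below the standard
    administered = []                     # (category, difficulty) item log
    for i in range(n_items):
        category = CATEGORIES[i % len(CATEGORIES)]  # vary the content areas
        administered.append((category, round(target, 2)))
        correct = answers_correctly(target)
        target += step if correct else -step  # step toward the candidate's level
    return target, administered           # rough ability estimate plus log

# A candidate who can answer items easier than difficulty 1.2 ends up
# measured near that level.
estimate, items = run_cat(lambda d: d < 1.2)
print(round(estimate, 2))  # ~1.1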

95% Confidence is Necessary to Pass or Fail a CAT Exam

The high achiever who is able to answer most of the questions correctly will find that the computer ends the exam early. A candidate may worry that something is wrong because the exam was so short when, in reality, the computer was able to determine that the candidate jumped far higher than the standard, that is, was well above the level of competency required on a CAT exam.

The computer stops the exam when it is 95% confident that the individual candidate has reached the level of competency, when it is 95% confident that the individual cannot reach the level of competency, or when the maximum allotted time is reached. Thus, the length of a CAT exam is variable. Sometimes a candidate can demonstrate competency in as few as 60 test items. Sometimes, after 60 questions, the candidate has been shown to be close to entry-level competency, but the computer has not determined within the 95% confidence requirement whether the candidate is above or below the standard. In such cases, the test continues to provide additional items; each additional item provides more information for determining whether the candidate meets entry-level competency. Regardless of the length of the test, items will still vary across the content domain (Airway/Oxygenation/Ventilation, Cardiology, Medical, Trauma, and Operations).
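
In code terms, the stopping rule can be sketched as a confidence-interval check. The sketch below assumes the exam tracks an ability estimate and its standard error; the variable names and the normal-theory interval are illustrative assumptions, not the NREMT's published algorithm.

def decide(theta_hat, se, passing_standard=0.0, z=1.96):
    # Render a pass/fail decision only when a 95% confidence interval
    # around the ability estimate `theta_hat` (standard error `se`) lies
    # entirely on one side of the passing standard; otherwise continue.
    if theta_hat - z * se > passing_standard:
        return "pass"      # 95% confident the candidate is above the standard
    if theta_hat + z * se < passing_standard:
        return "fail"      # 95% confident the candidate is below the standard
    return "continue"      # not yet confident either way: give more items

print(decide(0.8, 0.3))    # pass (interval 0.21 to 1.39 clears the standard)
print(decide(0.1, 0.3))    # continue (interval straddles the standard)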

When (and if) a candidate reaches the maximum length of an examination, the ability estimate for that candidate is at its most precise. Using the high-jumper example, the computer can distinguish those who jump 3 feet 11 inches from those who jump 4 feet 1 inch. Those who clear 4 feet more often than they miss it will pass. Those who jump 3 feet 11 inches but fail to clear 4 feet often enough will fail and must repeat the test. Some candidates won't come close to jumping four feet; these candidates are below or well below entry-level competency. This, too, can be determined fairly quickly, and such candidates may have their examinations ended early. When an examination runs to roughly 70 questions and the candidate fails, he or she has been shown, within the 95% confidence requirement, not to have reached entry-level competency.

In a CAT exam, it is important that the candidate answer every question to the best of his or her ability. The CAT exam gives the candidate more than adequate opportunity to demonstrate ability, and it provides the precision, efficiency, and confidence needed to conclude that a successful candidate meets the definition of entry-level competency and can become a Nationally Certified EMS provider.

Examination Results

Exam results are posted to the NREMT's password-secure website through an individual's login account, typically the next day. Candidates who pass the exam are sent National EMS Certification credentials by the NREMT. Candidates who successfully demonstrate entry-level competency do not receive specific details about their examination results, because that detail is not necessary.

Candidates who fail to meet entry-level competency are provided information about their testing experience. This information is useful for identifying areas on which to concentrate future study in preparation for the next attempt. The report indicates whether a candidate is “above,” “near,” or “below” entry-level competency in the various content areas. Candidates who are “above” the standard can be somewhat confident they have sufficient knowledge in that content area; however, failing to review that material could still contribute to failing the exam again. Candidates who are “near” the standard may be slightly above or slightly below it and should certainly study those areas; being “near” the passing standard does not by itself indicate pass or fail, but rather marks an area to study. Candidates who are “below” the standard need to strengthen their study in those areas. Candidates who fail the examination will have all items on the failed exam “masked,” meaning that a masked item will not appear on any future exam taken by that candidate.
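
Masking can be pictured as a per-candidate filter over the item pool. A minimal sketch, assuming items are simple records with an "id" field (the pool structure here is invented for illustration):

def eligible_items(item_pool, masked_ids):
    # Items the candidate saw on a failed attempt ("masked" items) are
    # excluded from any future exam built for that candidate.
    return [item for item in item_pool if item["id"] not in masked_ids]

pool = [{"id": 101}, {"id": 102}, {"id": 103}]
print(eligible_items(pool, masked_ids={102}))  # items 101 and 103 remain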

Studying examination items is not a helpful way to prepare for the job of an EMT; studying the tasks and the job of an EMT provides the best preparation. Candidates who memorize items in hopes of “getting them right” the next time are wasting their time, because masking prevents them from seeing the same items again.

A CAT examination is very precise in determining a candidate's level of competency. Candidates who fail the exam and do not study before their next attempt will most likely be measured at the same level as on their first attempt. Failing candidates who do not change their ability level (become able to jump higher) will be measured the same again. The best way to improve ability is to practice; in this case, that means study.

The NREMT produced a DVD of videos explaining the purpose of the NREMT, how computer adaptive testing works, and how to register for the examination. All candidates are urged to watch the videos (now available on our website).

View Videos

Purpose of the NREMT

Learn More about Computer Based Testing

Step-by-Step Instructions for Applying Online for the NREMT Test

(Requires a high-speed Internet connection)
