MEPS Annual Methodology Report 2022

Deliverable Number: 121D.102
Contract Number: 75Q80120D00024
June 30, 2023

Authors
Westat
Westat Reference Number: 2-7-679
Final

Submitted to:
Agency for Healthcare Research and Quality
Center for Financing, Access, and Cost Trends
560 Fishers Lane
Rockville, MD 20850

Submitted by:
Westat
An Employee-Owned Research Corporation®
1600 Research Boulevard
Rockville, Maryland 20850-3129
(301) 251-1500

Table of Contents

Introduction
1. Sample
1.1 Sample Composition
1.2 Sample Delivery and Processing
2. Instrument and Materials Design
2.1 Introduction
2.2 Changes to CAPI Instrument for 2022
2.3 Testing of the Questionnaire and Interviewer Management System
2.4 Changes to Materials and Procedures for 2022
3. Recruiting and Training
3.1 Field Interviewer Recruiting for 2022
3.2 2022 Interviewer Training
3.2.1 Experienced Interviewer Training
3.2.2 Continuing Education for All Interviewers
4. Data Collection
4.1 Data Collection Procedures
4.2 Data Collection Results: Interviewing
4.3 Data Collection Results: Authorization Form Signing Rates
4.4 Data Collection Results: Self-Administered Questionnaire (SAQ) and Diabetes Care Supplement (DCS) Collection Rates
4.5 Quality Control
4.6 Security Incidents
5. Home Office Support of Field Activities
5.1 Preparation for Field Activities
5.2 Support During Data Collection
6. Data Processing and Data Delivery
6.1 Processing to Support Data Delivery
6.1.1 Schedules for Data Delivery
6.1.2 Data Quality Control System
6.1.3 Transformation
6.1.4 TeleForm/Data Editing of Scanned Forms
6.1.5 Coding
6.2 Data Delivery
6.2.1 Variable Construction
6.2.2 File Deliveries
Appendix A

Tables

1-1 Initial MEPS sample size (RUs) and number of NHIS PSUs, all panels
1-2 Data collection periods and starting RU-level sample sizes, spring 2018 through fall 2022
1-3 Percentage of NHIS households with partially completed interviews in Panels 4 to 27
1-4 Distribution of Panel 27 sampled RUs by sample domain
2-1 Authorization form methods: Summary and benefits
2-2 Supplements to the CAPI core questionnaire (including hard-copy materials) for 2022
3-1 Staffing for spring field period, 2018–2022
3-2 Spring attrition rate among new and experienced interviewers, 2018–2022
3-3 Fall attrition rate among new and experienced interviewers, 2018–2022
3-4 Annual attrition rate among new and experienced interviewers, 2018–2022
4-1 Data collection schedule and number of weeks per round of data collection, 2022
4-2 Case potential categories for classifying and prioritizing case work, spring 2022
4-3 MEPS-HC data collection results, Panels 21 through 27*
4-4 Response rates by data collection year, 2013–2022
4-5 Completed cases by mode of interviewing for Panels 23 through 27
4-6 Summary of MEPS Round 1 response and nonresponse, 2017–2022 panels
4-7 Summary of MEPS Round 1 response, 2017–2022 panels, by NHIS completion status
4-8 Summary of MEPS Panel 27 Round 1 response rates, by sample domain by NHIS completion status
4-9 Summary of MEPS Panel 27 Round 1 response rates, per interview mode, by sample domain by NHIS completion status
4-10 Summary of MEPS Round 1 results for RUs who ever refused, Panels 21 through 27
4-11 Summary of MEPS Round 1 results for RUs who were ever traced, Panels 21 through 27
4-12 Interview timing comparison, Panels 21 through 27 (mean minutes per interview, single-session interviews)
4-13 Interview timing comparison by interview mode for Panels 23 through 27 (mean minutes per interview, single-session interviews)
4-14 Mean contact attempts by NHIS completion status and interview mode, Round 1 of Panels 25 through 27
4-15 Signing rates for medical provider authorization forms for Panels 20 through 27
4-16 Signing rates for pharmacy authorization forms for Panels 20 through 27
4-17 Results of Self-Administered Questionnaire (SAQ) collection for Panels 21 through 27
4-18 Results of Diabetes Care Supplement (DCS) collection for Panels 19 through 26
5-1 Number and percent of respondents who called the respondent information line, 2018–2022
5-2 Calls to the respondent information line, 2021 and 2022
6-1 2022 cases with comments or data check issues
6-2 Total number of comments by category
A-1 Data collection periods and starting RU-level sample sizes, all panels
A-2 MEPS household survey data collection results, all panels*
A-3 Response rates by data collection year
A-4 Summary of MEPS Round 1 response and nonresponse
A-5 Summary of Round 1 response by NHIS completion status
A-6 Summary of MEPS Round 1 results for all RUs who ever refused
A-7 Summary of MEPS Round 1 results for RUs who were ever traced, Panels 15-27
A-8 Interview timing comparison (mean minutes per interview, single-session interviews)
A-9 Mean contact attempts by NHIS completion status, Round 1
A-10 Signing rates for medical provider authorization forms
A-11 Signing rates for pharmacy authorization forms
A-12 Results of Self-Administered Questionnaire (SAQ) collection*
A-13 Results of Diabetes Care Supplement (DCS) collection*
A-14 Results of patient profile collection
A-15 Calls to respondent information line
A-16 Files delivered during 2022
Figures

Figure 6-1 Blaise to Dex transformation

Introduction

The Household Component of the Medical Expenditure Panel Survey (MEPS-HC, Contract 290-2016-00004I, awarded July 1, 2016, and Contract 75Q80120D00024, awarded July 13, 2020) is the central component of the long-term research effort sponsored by the Agency for Healthcare Research and Quality (AHRQ) to provide timely and accurate data on access to, use of, and payments for healthcare services by the U.S. civilian non-institutionalized population. The project has been in operation since 1996, each year producing a series of annual estimates of health insurance coverage, healthcare utilization, and healthcare expenditures. This report documents the principal design, training, data collection, and data processing activities of the MEPS-HC for survey year 2022.

Data are collected for the MEPS-HC through a series of overlapping household panels. Each year a new panel is enrolled for a series of five in-person interviews conducted over a 2½-year period.

Panels 23 and 24, however, have been extended to nine interviews conducted over 4½ years, as described in the section below on changes due to COVID-19. This report describes work performed for all of the panels active during calendar year 2022. Data collection operations in 2022 were for Panel 23, Round 9; Panel 24, Rounds 7 and 8; Panel 25, Round 5; Panel 26, Rounds 3 and 4; and Panel 27, Rounds 1 and 2. Data processing activity focused on delivery of full-year utilization and expenditure files for calendar year 2020.

The report touches lightly on procedures and operations that remained unchanged from prior years, focusing primarily on the results of the 2022 operations and features of the project that were new, changed, or enhanced for 2022. Tables in the body of the text highlight the 2022 results, with limited comparison to prior years. A set of tables showing data collection results over the history of the project is included in the Appendix.

Chapter 1 of the report describes the 2022 sample and activities associated with preparing the sample for fielding. Chapters 2 through 5 discuss activities associated with the data collection for 2022: updates to the survey questionnaire and field procedures; field staff recruiting and training; data collection operations and results; and home office support of field activities. Chapter 6 describes data processing and data delivery activities.


Changes Due to COVID-19

All MEPS Household Component (MEPS-HC) face-to-face interviewing ceased on March 17, 2020, due to the impact of COVID-19 on American life. Data collection switched to the telephone mode, and in 2020 and 2021 a mix of in-person and telephone interviewing was used, depending on the level of the COVID-19 pandemic. In 2022, MEPS added computer-assisted video interviewing (CAVI) as an alternative to telephone interviewing.

MEPS-HC continued several modifications to project systems, processes, and procedures begun in 2020 to respond to the pandemic and added several more to adapt to the ongoing pandemic. Please see the 2020 and 2021 methodology reports for additional details:

Extension of Panels 23 and 24. Anticipating the potential negative impacts of the COVID-19 pandemic on response rates and the number of households that would be included in 2020, 2021, and 2022 data, a decision was made to extend Panel 23 and Panel 24 through nine rounds. The extended panel rounds have been conducted primarily by telephone, with limited in-person interviewing conducted when safe for hard-to-reach or hearing-impaired respondents.

Virtual New Interviewer Training. In 2022 MEPS again trained new interviewers virtually through a blend of asynchronous home study modules and synchronous Zoom sessions. MEPS added a second new hire training in May to the usual January training to ensure sufficient staffing for the three main panels and the two extension panels.

Introduction of CAVI as an Alternative to Telephone. In 2022, MEPS interviews were conducted in three modes: in-person, CAVI, and telephone. Interviewers were given guidance throughout each field period about which modes were appropriate for their cases, and interview modes were closely monitored. CAVI allowed interviewers and respondents to both see and hear each other, let respondents share images of records, and let interviewers display show card images to help respondents select a response. CAVI interviewing started in late spring 2022 but became far more widespread in the fall, accounting for over 20 percent of completed interviews. CAVI was offered when respondents were unwilling to have an interviewer in their home and for later-round cases that had been completed by telephone in 2020 and 2021.

Electronic Authorization Forms. In 2022 MEPS began offering electronic methods for authorization forms (AFs). During in-person interviews, available household members signed on the interviewer’s laptop (using a process hereafter referred to as eSignature). For household members not available during the in-person interview, or for CAVI or telephone interviews, respondents were sent a link via email or text to sign forms in DocuSign. Paper AFs were still used when requested, or for household members who were unavailable and not eligible for DocuSign because they had not provided an email address or cellphone number. Collecting electronic signatures provided considerable benefits to the project, most notably reducing burden for both respondents and interviewers, which resulted in a savings of approximately 6 minutes during the computer-assisted personal interviewing (CAPI) interview. Additional benefits included a shorter time span between collection of the signature and its receipt and fewer errors on AFs that would otherwise make them unusable.


1. Sample

Each year, a new, nationally representative sample for the Medical Expenditure Panel Survey Household Component (MEPS-HC) is drawn from among households responding to the previous year’s National Health Interview Survey (NHIS). Households in a new panel typically participate in a series of five interviews that collect data covering two full calendar years. For each calendar year the sample respondents from two panels, one completing its first year in the study (Round 3) and one completing its second year (Round 5), are combined for analysis purposes, resulting in a series of annual estimation files. Beginning in 2020, with the onset of the COVID-19 pandemic, and continuing through 2022, there were concerns about declining response rates as well as challenges in recruiting respondents by telephone. To help maintain the ongoing sample, Panel 23 was extended for a third year of data collection in 2020 and a fourth year in 2021, and Panel 24 was extended for a third year in 2021 and a fourth year in 2022.

The sample for the new MEPS panel in 2022, Panel 27, was selected from among households responding to the NHIS in the preceding year, where the NHIS sample was based on the NHIS sample design initially implemented in 2016 (as were Panels 22-26). Specifically, the MEPS household sample was randomly selected from among households that participated in the NHIS during the first three quarters of 2021 and that had been assigned to NHIS Panels 1 and 3, the NHIS panels designated for MEPS.

This chapter describes the 2022 MEPS sample drawn from 2021 NHIS-responding households as well as steps taken to prepare the new sample for fielding.


1.1 Sample Composition

Table 1-1 shows the starting sample sizes in terms of the number of reporting units (RUs) for all MEPS panels through Panel 27 and the number of MEPS primary sampling units (PSUs) from which each panel was drawn. Note that the change in the number of PSUs for Panel 12 reflects the redesign of the NHIS sample implemented in 2006 (thus affecting MEPS in 2007), following the 2000 decennial census. The number of PSUs for Panel 27 is based on the number of PSUs associated with MEPS after the 2016 NHIS sample redesign; Panel 27 is the sixth MEPS panel drawn under this design. The reduction in the number of PSUs after Panel 22 stemmed from further modifications to the NHIS design. The MEPS sample units presented are RUs, each of which represents a set of related persons living together within the same NHIS-responding household selected for MEPS participation. Related members of the NHIS households sampled for MEPS who move as a unit during the MEPS data collection period (as well as individuals who move out separately) form new RUs for interviewing purposes. Each new RU is followed over the course of the five MEPS data collection rounds and interviewed at its new address.

Table 1-1. Initial MEPS sample size (RUs) and number of NHIS PSUs, all panels

Panel Initial sample size (RUs)* MEPS PSUs*
1 10,799 195
2 6,461 195
3 5,410 195
4 7,103 100
5 5,533 100
6 11,026 195
7 8,339 195
8 8,706 195
9 8,939 195
10 8,748 195
11 9,654 195
12 7,467 183
13 9,939 183
14 9,899 183
15 8,968 183
16 10,417 183
17 9,931 183
18 9,950 183
19 9,970 183
20 10,854 183
21 9,851 183
22 9,835 168
23 9,960 143
24 9,976 139
25 10,008 139
26 9,674 150
27 9,700 150

* RUs: Reporting units; PSUs: Primary sampling units.


MEPS data collection is conducted in two main fielding periods each year. Typically, during the January-June period, Round 1 of the new Panel and Rounds 3 and 5 of the two continuing Panels are fielded, with the Panel in Round 5 retiring at mid-year. Normally, during the July-December period, Round 2 of the new Panel and Round 4 of the remaining continuing Panel are fielded.

However, with the extension of Panels 23 and 24 beginning in 2020, additional Rounds were fielded: Rounds 7 and 9 in the January-June period, with the Panel in Round 9 retiring at mid-year, and Rounds 6 and 8 in the July-December period. Table 1-2 summarizes the combined workload for the January-June and July-December periods from spring 2018 through fall 2022.

Over the years shown in Table 1-2, the combined spring and fall workload has ranged from a low of 36,664 in 2019 to a high of 40,168 in 2021. Typically, the interviewing workload during the spring field period, when three Panels are active, is substantially larger than during the fall, when there are only two. In 2022, there were five active Panels in the spring field period and three in the fall field period. The spring field period still had more cases, with 24,465 RUs fielded, while the fall workload of 12,491 RUs was the lowest of the 5 years shown.

Table 1-2. Data collection periods and starting RU-level sample sizes, spring 2018 through fall 2022

Data collection period RU-level sample size*
January – June 2018 23,573
Panel 21 Round 5 6,842
Panel 22 Round 3 6,892
Panel 23 Round 1 9,839
July – December 2018 13,766
Panel 22 Round 4 6,726
Panel 23 Round 2 7,040
January – June 2019 23,261
Panel 22 Round 5 6,624
Panel 23 Round 3 6,773
Panel 24 Round 1 9,864
July – December 2019 13,403
Panel 23 Round 4 6,569
Panel 24 Round 2 6,834
January – June 2020 22,667
Panel 23 Round 5 6,413
Panel 24 Round 3 6,382
Panel 25 Round 1 9,872
July – December 2020 15,633
Panel 23 Round 6 5,264
Panel 24 Round 4 5,574
Panel 25 Round 2 4,795
January – June 2021 23,340
Panel 23 Round 7 4,624
Panel 24 Round 5 4,879
Panel 25 Round 3 4,328
Panel 26 Round 1 9,509
July – December 2021 16,828
Panel 23 Round 8 4,093
Panel 24 Round 6 4,048
Panel 25 Round 4 3,768
Panel 26 Round 2 4,919
January – June 2022 24,465
Panel 23 Round 9 3,673
Panel 24 Round 7 3,573
Panel 25 Round 5 3,339
Panel 26 Round 3 4,180
Panel 27 Round 1 9,700
July – December 2022 12,491
Panel 24 Round 8 3,174
Panel 26 Round 4 3,866
Panel 27 Round 2 5,451

* RU-level sample size for this table derived from field management system counts and operational reports detailing fielded sample.


Each new MEPS panel includes some oversampling of population groups of particular analytic interest. Since 2010 (Panel 15), the set of sample domains has included oversamples of Asian, Black, and Hispanic populations. All households set aside in the NHIS for MEPS that have at least one household member in any of these three categories (Asian, Black, or Hispanic) are included in the MEPS sample with certainty. “White and other race” households have been partitioned into two sample domains and subsampled at varying rates across the years. These domains reflect whether an NHIS-responding household characterized as “White or other race” provided “complete” information at the household level for the NHIS or if only “partially complete” information was provided.
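To make the selection logic concrete, the sketch below is a minimal, hypothetical illustration (not project code) of the rules described above: households with any Asian, Black, or Hispanic member are retained with certainty, while “White, other” households are subsampled at one rate for NHIS completes and another for partial completes. The dictionary keys and the example rates (taken from the Panel 27 row of Table 1-3) are illustrative assumptions only.

    import random

    # Illustrative subsampling rates for the "White, other" domain
    # (hypothetical placeholders; actual rates vary by panel, per Table 1-3).
    RATE_WHITE_OTHER_COMPLETE = 0.81
    RATE_WHITE_OTHER_PARTIAL = 0.80

    def select_household(household, rng=random):
        """Return True if an NHIS household is retained for the MEPS sample.

        `household` is assumed to be a dict with hypothetical keys:
          any_asian_black_hispanic -- True if any member is Asian, Black, or Hispanic
          nhis_complete            -- True for an NHIS complete, False for a partial
        """
        # Oversampled race/ethnicity domains are included with certainty.
        if household["any_asian_black_hispanic"]:
            return True
        # "White, other" households are subsampled at a domain-specific rate.
        rate = (RATE_WHITE_OTHER_COMPLETE if household["nhis_complete"]
                else RATE_WHITE_OTHER_PARTIAL)
        return rng.random() < rate

    # Example: a "White, other" partial complete is kept with 80 percent probability.
    example = {"any_asian_black_hispanic": False, "nhis_complete": False}
    print(select_household(example))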

As background, the partitioning of the “White, other” domain into these two domains began in 2011 (Panel 16). The partial completes were sampled at a lower rate than the full completes in order to lessen the impact on the field effort resulting from the difficulty of gaining the cooperation of these households. The last two columns in Table 1-3 show the subsampling rates for the two groups since Panel 16. The partial completes in the “White, other” domain have been subsampled at rates ranging from a low of 40 percent (Panel 17) to a high of 80 percent (Panel 27). Table 1-4 shows the Panel 27 sample distribution by domain.

Table 1-3. Percentage of NHIS households with partially completed interviews in Panels 4 to 27
Panel Percentage with partially completed interviews Subsampling rate for NHIS completes in “White, other” domain* Subsampling rate for partial completes in “White, other” domain
4 21
5 24
6 22
7 17
8 20
9 19
10 16
11 23
12 19
13 25
14 26
15 21
16 25 79 46
17 19 51 40
18 22 63 43
19 18 66 42
20 19 84 53
21 22 81 49
22 19 77 49
23 20 79 49
24 16 79 50

25 11 77 50
**26 15
27 17 81 80

* The figures in the second column of the table are the proportion of partial completes in the total delivered sample, after subsampling. The figures in the third and fourth columns are subsampling rates applied to the two White/other subdomains in Panels 16 through 27.

**Note that Panel 26 rates were left blank because subsampling was done by size of state rather than by race/ethnicity domain.


Table 1-4. Distribution of Panel 27 sampled RUs by sample domain

Sample domain Number Percent
Asian 764 7.88
Black 1,850 19.07
Hispanic 1,305 13.45
White, other 5,781 59.60
NHIS complete 4,977 51.31
NHIS partial complete 804 8.29
Total 9,700 100.00


1.2 Sample Delivery and Processing

The 2022 MEPS sample was received from AHRQ and NCHS in three deliveries. The first delivery, containing households sampled from the first and second quarter of the 2021 NHIS, was received on September 10, 2021. Households selected from the third quarter of the NHIS were delivered on November 17, 2021.

The September delivery, which contains the majority of the new sample, is instrumental to the project’s schedule for launching interviewing each year in early January. The partial file gives insight into the demographic and geographic distribution of the households in the new Panel. This information, when combined with information on older Panels continuing in the new year, guides project decisions on the number and location of new interviewers to recruit.

Upon receipt of the first portion of the 2022 sample, project staff also reviewed the NHIS sample file formats to identify any new variables or values and to make any necessary changes to the project programs that use the sample file information. Following this initial review, staff proceeded with the standard processing through which the NHIS households are reconfigured to conform to MEPS reporting unit definitions and prepared the files needed for advance mailouts and interviewer assignments. The early sample delivery also allows time for checking and updating NHIS addresses to improve the quality of the initial mailouts and to identify households that have moved since the NHIS interview.


2. Instrument and Materials Design

2.1 Introduction

Each year, the project makes a number of changes to the instrument used to collect MEPS-HC data, as well as to the field procedures followed by the interviewers who collect the data. The notable changes made for 2022 are detailed in this chapter.


2.2 Changes to the CAPI Instrument for 2022

The MEPS-HC CAPI instrument was modernized as part of a technology upgrade launched in spring 2018. For each data collection cycle since then, AHRQ and Westat have worked together to define a set of modifications to the CAPI instrument. Some modifications are new items or new sections, whereas others are updates or fixes to existing items.

For 2022, there was only one notable global change: adding a CAPI hot key (F7) to bring up an electronic version of the English show cards for interviewers to reference or read to their respondents. This change was intended to help improve telephone interview interactions.

Section-specific changes for the 2022 data collection period, both spring and fall, are summarized below.

Start/Restart (ST). The interview mode (in-person, telephone, or CAVI) is now recorded by the interviewer at the start of each interview session in the ST section of CAPI, instead of after the interview is completed in the RU Information Module. Collecting the mode at the beginning of each session allows more than one mode to be recorded when the interview is completed across multiple sessions. Additionally, the RF (Respondent Forms) section of CAPI uses the interview mode to provide tailored instructions regarding the collection of AFs; see below for more information.

Calendar (CA). In response to feedback from computer-assisted recorded interview (CARI) recordings, the Calendar section introduction text was moved to a separate screen prior to the records grid. This change encourages verbatim reading from interviewers.

Date Picker. To simplify training and the user interface of the date picker, the monthly recurrence options were eliminated. Paradata indicated that these options were rarely used. Additionally, the event type listed in the header of the date picker was changed from an acronym to a descriptive label (for example, Telehealth instead of TH) to remind interviewers to add only events of the same type at the date picker. This change was intended to reduce the opportunity for interviewer error. Finally, when the discharge date recorded at the hospital date picker is the same as that person’s reference period end date, a pop-up question confirms whether the RU member is still in the hospital. The wording of this question was revised to help prevent closing a still-in-hospital event in error.

Provider Look-up. A new “AHA” column was added to the provider look-up, indicating facilities that are members of the American Hospital Association (AHA). Interviewers are trained to select the AHA entry when they are having trouble distinguishing between multiple identical (or very similar) search results, after confirming all the relevant details. This should reduce search time for large facilities with many look-up entries. Additionally, a number of common pharmacy retail clinics were added to the provider look-up. Many pharmacy retail clinics have expanded their health care offerings, including vaccinations as well as the diagnosis or treatment of minor injuries and illnesses. This will ideally increase the share of events linked to a provider with a National Provider Identifier (NPI).

Condition Look-up. To reduce the number of “Not Specified” or “Location Not Specified” entries that are selected, a new LOCATION probe was added to the condition look-up. For select entries where the location is not specified, the interviewer is prompted to use a standard follow-up probe about the location. The condition look-up was also updated with a small number of additional conditions.

Prescribed Medicine Look-up. In spring 2022, a prescribed medicine look-up was added to CAPI to increase data quality while reducing burden. From all prescribed medicine roster screens, interviewers can now search a list of over 2,000 prescribed medicines, including various strengths and forms. There are options to select an entry directly from the look-up, edit an entry (for example, to modify the strength or form), or add a manual entry. The prescribed medicine look-up functions similarly to the other CAPI look-ups in that it uses a trigram search method. The look-up also formalizes the probing requirements for prescribed medicines and provides interviewers with common synonyms and acronyms.
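The report does not document the trigram algorithm itself, so the following is a rough, hypothetical sketch of how a trigram-based look-up search can work: each entry and the interviewer’s query are broken into overlapping three-character sequences, and entries are ranked by the overlap of those sequences. The sample medicine strings are invented for illustration.

    def trigrams(text):
        """Return the set of overlapping 3-character sequences in a string."""
        padded = " " + text.lower() + " "   # pad so short words still produce trigrams
        return {padded[i:i + 3] for i in range(len(padded) - 2)}

    def rank_entries(query, entries):
        """Rank look-up entries by trigram similarity to the query, best match first."""
        query_grams = trigrams(query)
        scored = []
        for entry in entries:
            entry_grams = trigrams(entry)
            # Jaccard similarity of the two trigram sets
            score = len(query_grams & entry_grams) / max(len(query_grams | entry_grams), 1)
            scored.append((score, entry))
        return [entry for score, entry in sorted(scored, reverse=True)]

    # Example with invented look-up entries:
    meds = ["lisinopril 10 mg tablet", "lisinopril 20 mg tablet", "loratadine 10 mg tablet"]
    print(rank_entries("lisinop", meds)[0])   # prints the closest match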

Provider Probes (PP). After the first Provider Probe, the reference period is now optional text for all other questions in the series. This helps reduce burden and encourages verbatim reading. A fill that reads “other than what we’ve already talked about” was also added to reduce confusion or duplicate reporting of events by respondents.

Other Medical Expenses (OM). To accommodate alternate payment arrangements, the question about long-term medical equipment purchases was updated to include equipment rentals.

Charge/Payment (CP). To complement the change made in the OM section, questions about charges for long-term medical equipment were updated to also refer to rentals.

COVID-19 (CV). In response to the COVID-19 pandemic, a new section was added in 2021, initially collecting information about delays in care due to the pandemic and later adding questions about COVID-19 vaccination. For spring 2022, the delays-in-care questions were asked only of continuing panels and covered only the period through December 31, 2021. The COVID-19 vaccination series was also revised to add new questions regarding booster shots. In fall 2022, the questions on delays in care due to the pandemic were removed entirely.

Employment (EM) and Related Sections. A few minor changes were made to the employment sections for spring 2022. One change was modifying the approach when a person reports health insurance coverage from both a job and a union. These people are now asked to pick whether the employer or union insurance is primary. Then in the health insurance section, only details about the primary insurance source are collected. This change was made to reduce the amount of time and resources spent on de-duplicating insurance coverage.

Another change was modifying the routing and wording for the question asking whether a job now provides health insurance (RJ80). The question universe now includes continuing jobs where the jobholder initially reports holding partial-round health insurance coverage. This change was made to prevent collecting extraneous or inaccurate data.

Health Insurance (HX) and Related Sections. Show card HX-2 (which displayed an example of each state-specific Medicaid card) was removed, as were callouts at related questions. The remaining HX show cards were renumbered to accommodate this change.

Another update was simplifying the Tricare response categories at all related items (HX125, HX260, PR280). Multiple military health care response options (Tricare Standard, Tricare Prime, and Tricare Extra) were collapsed into a single “Tricare” option. As Tricare plan names and benefits have changed over time, this change was made to simplify the questionnaire and reduce respondent burden.

To reduce interview administration time and burden, at HX130 the definition of Indian Health Service was moved to optional text.

New follow-up unfolding bracket questions were added in both the HX (HX702 and HX704) and OE (OE212 and OE214) sections to capture more detail about policy deductible amounts. This change was made to improve annual deductible estimates.

Contacting Module (CM). In spring 2022, MEPS introduced the collection of electronic AFs. To facilitate this effort, a new section called the Contacting Module was added to the CAPI instrument. Most critically for AFs, the CM section collects an email address and cellphone number for each adult household member. These data enable MEPS to send emails and texts to RU members regarding DocuSign AFs.

A large portion of the Closing section was moved to this new CM section. This includes the collection of information to ensure that households can be reached for participation in future rounds, such as the best contact time, proxy information, a mailing address if different from the locating address, a second home address, a locating contact, an alternate respondent, and plans to move.

In fall 2022, a slight change was made to the CM section. Instead of asking the respondent whether it is okay to text other RU members, the interviewer now asks whether the cellphone owner is available to talk. If so, the owner is asked directly for permission to send text messages to their cellphone.

Respondent Forms (RF). In spring 2022, MEPS began to offer electronic methods for AFs to streamline the signature process for interviewers as well as signers. Significant changes were needed in the RF section to accommodate the two new signing methods (eSignature and DocuSign), in addition to continuing to offer the paper method. Collecting electronic signatures provided considerable benefits to the project, most notably reducing burden for both respondents and field interviewers, which resulted in a savings of approximately 6 minutes during the CAPI interview. Additional benefits include a shorter time span between collection of the signature and receipt and fewer errors on AFs. Table 2-1 provides a summary of the three AF methods and their benefits.

Table 2-1. Authorization form methods: Summary and benefits

Method Summary Benefits
eSignature RU members available in-person at the time of the interview sign on the MEPS laptop screen using a stylus
  • RU members sign electronically; signatures are transmitted to MEPS HO with CAPI data
  • Do not need to prepare paper forms
  • No interviewer follow-up steps needed
DocuSign RU members not available during the interview receive a DocuSign link via email and/or text after the interview and sign securely using any computer, smartphone, or tablet
  • RU members not available during the interview can electronically sign
  • Do not need to prepare paper forms or arrange to pick up forms
  • DocuSign automatically sends reminder emails and texts; field interviewers can track status in management system
Paper Interviewer prepares blank paper form during interview; can be signed either during in-person interview or at a later time
  • Can be provided to signers outside the RU
  • Flexibility for RUs who cannot use electronic methods, or for unusual situations


The RF section assigns the signing method based on interview mode and contact information availability. Within an RU, all members may be assigned the same method, or different members may be assigned different methods. After the methods are assigned, the RF section loops through each person to: (1) use the eSignature application, (2) explain the DocuSign invitations that will be sent after the interview is complete, or (3) prepare and complete the paper AFs.
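The exact CAPI rules are not spelled out in this report, but the sketch below gives a simplified, hypothetical version of the kind of assignment logic described above: members present at an in-person interview get eSignature, members with an email address or cellphone number on file get DocuSign, and everyone else gets the paper method. The function and parameter names are illustrative assumptions, not the project’s actual logic.

    def assign_af_method(interview_mode, present_in_person, email=None, cellphone=None):
        """Assign an authorization form signing method for one RU member.

        interview_mode     -- "in-person", "CAVI", or "telephone"
        present_in_person  -- True if the member is present during an in-person interview
        email, cellphone   -- contact information collected in the CM section, if any
        """
        if interview_mode == "in-person" and present_in_person:
            return "eSignature"   # sign on the interviewer's laptop with a stylus
        if email or cellphone:
            return "DocuSign"     # link sent by email and/or text after the interview
        return "paper"            # fallback when no electronic option is available

    # Example: a telephone interview where the member provided only a cellphone number
    print(assign_af_method("telephone", False, cellphone="555-0100"))   # -> DocuSign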

A new eSignature application was specifically designed for completing MEPS AFs. It was integrated into CAPI and launches at the appropriate screen in the RF section, much as the date picker or the provider look-up does.

For fall 2022, some minor tweaks were made to the RF section and eSignature application based on lessons learned from the spring cycle. These included: enlarged signature boxes on the eSignature application screen; revised instructions for interviews conducted by telephone and CAVI (computer-assisted video interviewing); and more consistent screens for the eSignature and paper methods.

Closing (CL). In 2022, multiple changes were made to the Closing section to accommodate two new procedures: electronic AFs and debit card incentives. While contact information has traditionally been requested in the Closing section, it needed to be collected earlier in the interview so it could be used to determine each person’s appropriate AF signing method during the Respondent Forms section. As a result, most items from the CL section were moved to the new CM section previously described.

MEPS respondent incentives were updated from checks to debit cards, and the delivery of the incentive was moved from the CAPI instrument to the Interviewer Management System (IMS). As a result, multiple changes were made to the Closing section to update wording and remove screens related to preparing and delivering the checks. Additionally, the interviewer now records the interview language in the CL section, instead of the RU Information Module. This ensures the interview language is stored along with the CAPI data and is available immediately for post-collection tasks, such as sending DocuSign invitations.

Supplements to the CAPI Instrument

Table 2-2 shows the supplements for the rounds administered in calendar year 2022. The only notable change was to the Your Health and Your Opinions preventive care self-administered questionnaire (PSAQ). In 2020, the PSAQ was modified to include supplemental items on alcohol and drug use, as well as items on mental health counseling and treatment. The fall 2022 PSAQ retained much of this special content but eliminated items on the exact number of days of drug and alcohol use and some of the items related to benefits of counseling and alternative counseling treatments. In their place, the PSAQ included select questions from the “Social and Health Experiences” questionnaire (known internally as the Social Determinants of Health or SDOH SAQ), which had been fielded in 2021. The selected questions covered topics not as well represented in the core MEPS questionnaire, including exercise and financial stability.

Table 2-2. Supplements to the CAPI core questionnaire (including hard-copy materials) for 2022

Supplement Round 1 (Spring 2022) Rounds 3, 5, 7, 9 (Spring 2022) Rounds 2, 4, 8 (Fall 2022)
Child Health X
Access to Care X
Income X
Assets Rounds 5 and 9 only
Medical Provider Authorization Forms for HS, OP, and ER Events X X X
Medical Provider Authorization Forms for MV, TH, HH, and IC Events X X
Pharmacy Authorization Forms X X
Your Health and Health Opinions (SAQ/PSAQ) Rounds 2, 4, 8 follow-up X
Diabetes Care Supplement (DCS) X


2.3 Testing of the Questionnaire and Interviewer Management System

Testing for the spring 2022 (Rounds 1/3/5/7/9) instrument was conducted between September and December 2021. Testing for the fall 2022 (Rounds 2/4/8) instrument was conducted between March and June 2022. Since 2018, many of the testing approaches and procedures used for the technical upgrade have been continued or adapted to maintain a comprehensive testing plan that supports the ongoing instrument development schedule.

CAPI instrument development and testing included multiple programming/testing iterations that each lasted several weeks. Testing was conducted by a mix of corporate testers, MEPS project staff, and trained programming staff. Project and systems staff performed all testing in close coordination with the design team. For each of the spring and fall instruments, AHRQ received an alpha delivery and conducted its own testing. The following month, AHRQ received a beta delivery and conducted additional testing.

The testing ensured that CAPI followed the design as intended and assessed whether the layout of each screen, both for a given question and across questions, consistently met the requirements designed to minimize measurement error. Feature testing checked all new features against specifications, including wording, text fills, legal and illegal responses, boundary conditions, and skip patterns. Testers validated every possible variation allowed by the specifications.

Both scripted and free-form testing were used throughout the development and testing process. A full suite of scripted test cases was defined by the design staff and analytic leads at Westat and is updated each cycle. These scripted test cases represent approximately 80 percent of the cases fielded, including common paths through the CAPI instrument across all panel rounds. The test script suite was executed through alpha and beta for the spring and fall testing cycles.

In contrast, free-form testing focused on design changes in the current instrument build and ensured that any reported instrument bugs had been fixed. Free-form testing was also used to ensure the stability of the CAPI data model and to evaluate the stored data in new or unusual situations. Testers routinely pushed array limits, backed up through the instrument, changed answers, and broke off and restarted cases to challenge performance boundaries.

Additional testing components, including enhanced integration testing and ad hoc/free-form testing, were also conducted. The enhanced integration testing allowed project staff to check electronic Face Sheet information, test the RU Information Module and the Interviewer Assignment Sheet (IAS), and make entries into the electronic record of calls and refusal evaluation form. The ad hoc testing component used information derived from actual cases to verify that all management information was brought forward correctly from previous rounds. Using actual case data also allowed staff to check uncommon paths through the MEPS instrument so that specific changes to the questionnaire could be thoroughly tested.

The spring 2022 development cycle also included extensive testing related to electronic AFs. This included unit and integrated testing of: revised screens and routing in the CAPI instrument; AF method assignment; the eSignature application; data including the AF array; the Basic Field Operating System (BFOS) AF module; receipt procedures; and DocuSign AFs, including the use of various devices to access and complete the forms.


2.4 Changes to Materials and Procedures for 2022

The manuals and the materials for the 2022 field effort were updated as needed to reflect changes to the questionnaire and management systems. Below is a description of the key changes to the materials and procedures.

Instructional Manuals

The field interviewer procedures manual was updated to address changes in field procedures and updates to the Interviewer Management System (IMS).

A new AF manual was prepared that detailed the procedures related to AFs for all three signing methods. Additionally, a new MEPS Computer-Assisted Video Interviewing (CAVI) Operations Manual was developed to fully detail the guidelines for conducting MEPS interviews via this mode. Hard-copy versions of these supplementary manuals were provided to all interviewers during the spring 2022 cycle.

Electronic Materials

To help prepare for upcoming interviews, the electronic face sheet in the IMS provides interviewers with information needed to contact their assigned households and familiarize themselves with the composition of the household and relevant details about their prior history with the survey. In 2022, minor revisions were made to the Contacting Information tab in the Face Sheet to align with the revised collection of contact information in the CAPI instrument.

The IMS also contains an RU Information module for documenting operational information to help the next round’s interviewer effectively work each case, an RU Contact module for reporting address and telephone number changes identified prior to the CAPI interview, and the Interviewer Assignment Sheet (IAS), which supports follow-up for AFs and SAQs not completed at the time of the interview. The Authorization Form Log in the IAS was updated to allow for recording follow-up calls related to AFs. Changes were also made to the Current Round Contacting Information tab in the IAS, to align with the revised collection of contact information in the CAPI instrument.

To support the new debit card incentive procedures, a Respondent Payment module was added to the IMS.

Interviewers continued to be equipped with iPhones for their MEPS work. When changes were made to the laptop IMS, corresponding changes were generally made to the iPhone mFOS application to match.

New for 2022 was the BFOS Authorization Form Module, which helps interviewers with their follow-up efforts related to AFs. This module shows when forms are received by receipt control, and interviewers check it before making follow-up calls.

Advance Contact and Other Case Materials

All respondent letters, monthly planners, and self-administered questionnaires were updated with the appropriate year references. Furthermore, the Informed Consent, Income Job Aid, Authorization Form Booklet, Record Keeper, and Records Job Aid were redesigned to match the refreshed materials look introduced in 2021.

There were multiple changes to materials related to the new electronic AF collection. A redesigned Authorization Form Booklet addresses the new electronic signing methods. Additionally, interviewers who conduct interviews in Spanish can refer to a new Spanish AF handout. This handout has a Spanish translation of the medical AF on one side and the pharmacy AF on the other. Finally, interviewers received multiple styluses used for signing via the eSignature application on the MEPS laptop.

The MEPSDocs.org website continued to be available to respondents to boost cooperation, address concerns about survey legitimacy or COVID-19, and offer recordkeeping tools. In 2022, the Income Job Aid was added to the website. The MEPSDocs website also has links to the show cards in both English and Spanish. These electronic show cards are accessed by interviewers during CAVI interviews (using Zoom to display the show cards), as well as by respondents during telephone interviews.


3. Recruiting and Training

3.1 Field Interviewer Recruiting for 2022

Overview. For spring 2022 data collection, MEPS attempted to recruit approximately 140 new interviewers across two recruiting periods to join the team of approximately 265 interviewers who were active on MEPS at the start of the 2022 data collection in early January. Our goal was to increase the team for spring data collection to about 400 interviewers.

To put the recruiting and attrition numbers into perspective, Table 3-1 summarizes the MEPS spring data collection staffing for the period of 2018-2022.

Table 3-1. Staffing for spring field period, 2018–2022

Data collection period Experienced interviewers staffed New interviewers staffed Total Interviewers for spring data collection
Spring 2018 345 75 420
Spring 2019 325 27 352
Spring 2020 269 121 390
Spring 2021 272 147* 419
Spring 2022 267 93** 360

* The spring 2021 total of 147 new interviewers includes 36 interviewers who were not trained until mid-June to shore up fall staffing.

** The spring 2022 total of 93 new interviewers includes 18 interviewers who were trained in mid-May to shore up the spring 2022 data collection staff.


Recruiting Goals. Based on a projected sample size of approximately 26,000 RUs across the five panels to be fielded for spring 2022 and the likely number of experienced MEPS interviewers available at the end of fall 2021 data collection (about 265), including a MEPS travel team of 10 to 12 members, Westat estimated needing to recruit between 120 and 140 new interviewers for the standard staffing model. The goal was to start data collection with approximately 400 interviewers actively working during the spring 2022 data collection period.

Westat uses the Field Interviewer Recruitment Module (FIRM), software designed to manage the data collector recruiting process. This system works in conjunction with BrassRing, an online application system used to collect, track, and manage applications for all positions at Westat. The BrassRing system collects applications from both external (new to Westat) and internal (current or former Westat field data collectors) applicants.

The main recruiting of new field interviewers for 2022 began in late September 2021 and continued until the end of December 2021. Since it was likely that MEPS would continue to complete telephone interviews, at least early in the spring 2022 data collection period, MEPS posted for regular interviewers, telephone/traveling interviewers, and telephone-only interviewers to cast as wide a net as possible for new hires for spring 2022. Westat implemented a COVID vaccination mandate, effective January 2022. In anticipation of difficulties in staffing enough new interviewers during the main recruiting period, MEPS planned to do additional recruiting beginning in early March to have additional new interviewers ready to attend an attrition training in May to supplement the spring 2022 interviewing staff. Recruitment for the attrition training began in early March and ended in late April.

Recruiting Outcomes. During the main recruiting period, 104 candidates accepted job offers. However, with the COVID vaccine mandate that went into effect at the beginning of January 2022, 15 of these candidates were not cleared to work because of noncompliance with the mandate. Of the remaining 89 candidates, 83 started training and 75 completed it. With the addition of these new trainees, the project began 2022 data collection with a total of 350 interviewers.

The goal was to add 50 more interviewers during the short attrition recruiting period. MEPS posted only for in-person interviewers during this additional recruiting period, since more of the data collection was transitioning back to in-person interviewing. However, only 28 candidates accepted job offers during this short recruiting period. Two of these candidates were not cleared to work because of noncompliance with the COVID vaccine mandate. Of the remaining 26 candidates, 25 attended training and 18 completed it.

Interviewer Attrition During 2022 Data Collection. During the spring data collection, 38 new interviewers and 32 experienced interviewers were lost to attrition. An additional 13 new interviewers and 25 experienced interviewers were lost during the fall round. Total attrition for the year was 29 percent, a rate more in line with the attrition level of 30 percent during the first year of the pandemic, when the data collection mode switched from in-person to telephone interviewing. Looking forward to 2023, MEPS plans to expand the interviewing staff so that data collection can begin with close to 400 interviewers. The breakdown of 2022 interviewer attrition is shown in Tables 3-2 (spring), 3-3 (fall), and 3-4 (total).

Table 3-2. Spring attrition rate among new and experienced interviewers, 2018-2022

Data collection period New interviewers lost Experienced interviewers lost Total interviewers lost
# % # % # %
Spring 2018 26 34.7 33 9.6 59 14.0
Spring 2019 8 29.6 56 17.2 64 18.2
Spring 2020 39 32.2 54 20.1 93 23.8
Spring 2021 64 40.8 33 12.1 97 22.6
Spring 2022 38 36.2 32 12.0 70 18.8


Table 3-2 shows the overall attrition rate during the spring data collection period from 2018 through 2022. Note that the total spring 2022 attrition rate of 18.8 percent is comparable to what MEPS experienced in spring 2019, the year before the pandemic hit and the data collection mode changed. The new hire spring attrition rate remains high but has decreased slightly, from 40.8 percent to 36.2 percent. In 2022, new interviewers were again trained virtually, a factor that makes it much easier for a new hire to quit.

Table 3-3. Fall attrition rate among new and experienced interviewers, 2018-2022

Data collection period New interviewers lost Experienced interviewers lost Total interviewers lost
# % # % # %
Fall 2018 10 20.4 16 5.1 26 7.2
Fall 2019 4 21.0 20 7.4 24 8.3
Fall 2020 16 19.5 8 3.7 24 8.0
Fall 2021 30 31.6 27 11.3 57 17.1
Fall 2022 13 19.4 26 11.0 39 12.9


Table 3-3 shows the overall attrition rate during the fall data collection period from 2018 through 2022. Note that the total fall 2022 attrition rate was 12.9 percent, a decrease from 2021, when the fall attrition rate was the highest in 5 years. However, the fall 2022 rate is still higher than the roughly 8 percent average rate of fall 2018 through fall 2020.

Table 3-4. Annual attrition rate among new and experienced interviewers, 2018-2022

Data collection period New interviewers lost Experienced interviewers lost Total interviewers lost
# % # % # %
2018 36 48.0 49 14.2 85 20.2
2019 12 44.4 76 23.4 88 25.0
2020 55 45.0 62 23.0 117 30.0
2021 94 58.6 60 22.1 152 35.4
2022 51 48.6 57 21.4 108 29.0


The annual attrition rate for 2022 was 29 percent, a decrease of 6.4 percentage points from 2021, when the annual attrition rate was the highest in the past 5 years. The continued high rate of attrition among new hires is likely related to the continuation of pandemic conditions, namely, the reliance on telephone interviewing for a high proportion of interviews and the virtual training format, which has made it much easier for new hires to quit mid-training.


3.2 2022 Interviewer Training

The overall structure for training new interviewers in 2022 was similar to the structure of the 2021 training, which had been adapted for remote administration due to the COVID-19 pandemic. Training began with a home study, continued with remote training conducted over Zoom for Government in late January 2022, and ended with completion of a two-part, post-classroom home study component. An attrition training was also conducted in May 2022.

Pre-Training Activities. Each new hire received a home study package that included a project laptop, phone equipment, and an interactive self-paced workbook with exercises and online modules, including videos and quizzes, administered through Westat’s Learning Management System (LMS). The LMS generated regular reports, allowing home office and field management staff to monitor the completion of each trainee’s home study. New hires received their home study package early enough to complete the assignments before the remote training, but not so early that their introduction to important study concepts and project terminology would degrade before the remote training. The home study also included additional practice with the Zoom platform prior to the remote training.

Remote Training. The usual 8½-day training format included the weekend off, which trainees could use to finish asynchronous content that had not been completed and to attend to personal needs affected by the remote approach. Synchronous content had to accommodate trainees from the East Coast to the West Coast; therefore, synchronous training hours ran from 12 pm through 5:30 pm EST.

Training sessions used a “block” approach, with each training day consisting of a block of synchronous training and a block of asynchronous training. Trainees had synchronous training for some portion of each training day and completed the required asynchronous blocks prior to the corresponding synchronous blocks.

For the 8½ days of project-specific training, each trainee was assigned to one of six training classrooms (two for the May attrition training), each staffed by a primary and a support trainer, one or two classroom runners, and a Zoom host. The selection of trainers for the 2022 new hire training was based on several criteria, including experience training with the CAPI instrument, overall project knowledge, and prior training experience. Prior to remote training, all training and support staff received training on the remote platform; the associated technologies; and the content, activities, and procedures associated with remote training.

The training sessions used a variety of formats for presenting material, including lecture, question-and-answer interactions, written exercises, group discussion of problems and resolutions, and activities in which trainees were required to seek answers by consulting project resource materials. In addition, full and “mini” mock interviews (or “mocks”) and dyad role-plays were used throughout the training, and they were central to training on both the mechanics and substance of the CAPI instrument.

Mocks are scripted interviews usually led by a classroom trainer who serves as both trainer and “respondent” while trainees take turns as the interviewer. Full mocks present the entire interview from Re-enumeration through Closing, while a “mini” mock relies on preloaded data to allow the training to begin at the desired questionnaire section. For the remote training, the mocks were delivered in one of three ways: demonstration, simulation, and teleconference.

Mock 1 (Round 1) was demonstrated in a synchronous session, with trainers displaying the CAPI screens and trainees reading the questions from the screen and calling out the appropriate keyboard response to the questions.

Mock 2 (Round 3) was posted on the LMS as an interactive CAPI simulation, with respondent answers coded into the simulation. The simulation looked and behaved like the CAPI instrument, but it gave corrective feedback immediately when the trainee coded a response incorrectly.

Mock 3 (Round 5) was administered via teleconference call led by an experienced trainer with additional support for troubleshooting. The mock was altered to begin in the Calendar section to allow for completion of the interview. The teleconference allowed for additional hands-on CAPI practice for trainees and gave the trainer the opportunity to evaluate trainee performance.

Mini-mocks and materials on the IMS were presented in one of three modes: synchronous training in the virtual classroom, CAPI simulation hosted on the LMS, and independent practice from hard-copy materials to allow for hands-on CAPI/IMS practice.

Dyads paired trainees in a virtual breakout room to conduct an interview with one trainee playing the role of interviewer, and the other using a script to play the respondent. Each dyad pair was observed by a dyad observer, either a field supervisor or other training staff. Dyads are an effective tool for reinforcing questionnaire concepts and building interviewer confidence in administering the instrument. They also provide trainers with an opportunity to assess each trainee’s interviewing skills and mastery of the questionnaire application.

The remote training component maintained the emphasis on interviewer behaviors and interviewing techniques that facilitate complete and accurate reporting. Trainers were instructed to reinforce good interviewing behaviors during mock interviews. Good interviewing behaviors include reading questions verbatim, training respondents to use records to aid recall, actively engaging respondents in the use of show cards, and using active listening and probing skills. Trainers called attention to instances in which interviewers demonstrated such behaviors. To enhance trainee awareness of behaviors that affect data quality, dyad scripts included instructions to take a “time-out” at certain items in the interview to highlight relevant data quality issues.

In the past, scripted lab material had been provided to trainers and trainees for in-person lab practice. Often, trainees who wanted additional CAPI practice would take the scripts with them to work on independently. For the remote training, Westat offered some hard-copy scripted materials to all trainees as required independent practice. Additional support was provided as follows:

  1. Westat offered “office hours” for trainees to connect by video with experienced MEPS staff who could answer questions and address concerns.

  2. Similar to in-person labs, Westat used a sign-up method (CVENT) for trainees to attend sessions for targeted review of concepts. Trainees shared their screens so that trainers could observe their work. Because most of the help provided during the remote lab sessions consisted of one-on-one practice rather than work from scripted materials, trainers consulted the training team lead as well as the trainee to determine where extra practice was needed. The trainer then customized the one-on-one instruction to meet the needs of the trainee.

  3. When a trainer or field management staff identified a trainee as needing one-on-one help, a member of the training floater team was assigned to work with the trainee.

Seventy-five new hires successfully completed the main training, and 18 successfully completed the attrition training.

Bilingual training followed a similar format to in-person training. Bilingual trainees participated in a 4-hour block of training on the last half-day of training. Trainees completed a Round 3 dyad in Spanish. The same format for dyads used in the main training was applied to bilingual training. Trainees divided into breakout rooms to complete the dyad with training staff visiting the breakout rooms to ensure good interviewing behaviors and an understanding of the CAPI instrument. Additionally, trainees used the breakout room approach to practice refusal conversion in Spanish. Three new interviewers successfully completed 2022 bilingual training and four new interviewers completed the bilingual attrition training.

Post-Remote Training Activities. The post-classroom home study was administered in two parts for the main training and combined into one part for the attrition training (to allow trainees to complete the home study prior to launch of the fall rounds). The first component was distributed on the last day of remote training, and new interviewers were required to complete it successfully before beginning fieldwork. It included an interactive exercise in BFOS Secure Messaging (BSM) and a mini-mock completed with a proxy respondent.

The home study also included a memo from the field director reviewing trainees’ tasks in preparation to interview, and it provided an “early work period” documentation form to assist them in setting up a work plan with their supervisor and completing tasks in a timely manner. At the same time, all field supervisors received a memo from the field director outlining their role in post-classroom training: setting clear expectations and providing support and ongoing training to their interviewers.

In addition to the home study, field supervisors engaged in additional post-training activities with new hires. New hires sat in on the report call of an experienced field interviewer and also reviewed assigned cases to report to their supervisor the best contact strategy for each. Field managers and field supervisors coordinated and implemented a mentoring/buddy plan that paired new hires with experienced field interviewers.

The new interviewers received the second component of the post-classroom home study about 6 weeks after the remote training. This component included both hard-copy materials and modules in the electronic LMS. It provided interviewers with additional training on respondent cooperation and participation in record-keeping activities, covered several important Re-enumeration topics and student RUs, and reinforced interviewer practices related to collecting quality data.

Return To Table Of Contents

3.2.1 Experienced Interviewer Training

Spring 2022 Round 1/3/5/7/9 Home Study. The Round 1/3/5/7/9 home study in December 2021 followed established formats but was further expanded to accommodate the introduction of the prescribed medicine look-up, new procedures and applications for AF collection (including e-signature and DocuSign), updated COVID-19 procedures, changes to the IAS and mFOS, and the extended panels. The 6-hour self-paced program contained an instructional memo, an electronic AF video demonstration, independent CAPI practice, iPhone training, and a quiz.

CAVI Virtual Training. In spring 2022, all MEPS interviewers were trained in groups over 14 sessions between January and May. New interviewers hired in January 2022 and May 2022 were trained immediately following the new hire training, and the rest of the field staff completed CAVI training between February and April. Approximately 314 interviewers completed the training; after accounting for attrition, the final counts were 299 interviewers, 25 field supervisors, and 4 field managers trained on CAVI. Each session consisted of a 3-day hybrid training, with synchronous sessions and asynchronous self-paced modules. The total training time commitment was 8 to 10 hours, which included all asynchronous assignments and the post-training mock interview that interviewers were required to complete before they could start offering CAVI as a mode to respondents.

Training topics included:

To support ongoing training, all training videos were posted to the LMS to allow interviewers to rewatch as necessary.

In-Person Refresher Training. Due to the COVID-19 pandemic, the refresher training scheduled for April 2022 was canceled.

Return To Table Of Contents

3.2.2 Continuing Education for All Interviewers

Fall 2022 Round 2/4/6/8 Home Study. The Round 2/4/6/8 home study in July 2022 followed established formats. The 2-hour self-paced program contained an instructional memo, example materials, and a quiz. Topics included the extension of the rounds in response to the COVID-19 pandemic, the return to in-person interviewing in select areas, additional training on CAVI interviewing, and additional training on AF collection. New interviewers hired in the spring were required to complete a mock interview with their supervisor, field manager, or designated senior interviewer before beginning the fall rounds of data collection.

Weekly Newsletter. In 2022, MEPS continued offering its field interviewer newsletter in a weekly format. The newsletter allows for additional training opportunities in a concise format and the ability to deliver content as needed to the field. Topics included CAPI questionnaire topics, procedural content, and answers to field interviewer questions.

Return To Table Of Contents

4. Data Collection

This chapter describes the MEPS-HC data collection operations and provides selected results for the eight rounds of MEPS-HC interviewing conducted in 2022. Selected comparisons to results of prior years are also presented. Tables showing results for all years of the study are provided in the appendix.

Return To Table Of Contents

4.1 Data Collection Procedures

MEPS data collection management relies on a set of interrelated systems and procedures designed to accomplish three goals: efficiency, data quality, and cost containment. The systems include the BFOS, which facilitates case management through case assignment, case status and hours reporting, data quality reporting, and interviewer efficiency reporting. Related systems include the CARI system and the Efficiency Analysis through Geospatial Location Evaluation (EAGLE) GPS validation module. The CARI system allows for review of recordings for selected interview items to assist in assessing both interviewer performance and the questions themselves. The EAGLE system evaluates the location of an interviewer relative to a respondent’s home and attempts to verify that the interviewer was at the residence for the duration of the interview, helping to validate that the interview took place. These tools, along with the implementation of models designed to identify cases with a higher propensity for completion, as well as on-hold procedures designed to prevent the overwork of cases in the field, form a comprehensive framework for the management of MEPS data collection.
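
The geospatial check can be illustrated as a simple proximity test between the interviewer’s recorded GPS coordinates and the respondent’s geocoded address. The sketch below is only a minimal illustration of that idea, not the EAGLE implementation; the function names, the 100-meter threshold, and the input structure are assumptions made for the example.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_meters(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def interview_at_residence(pings, home_lat, home_lon, threshold_m=100):
    """Return True if every GPS ping captured during the interview falls
    within threshold_m of the respondent's geocoded address.
    `pings` is a list of (lat, lon) tuples recorded during the interview."""
    if not pings:
        return False  # no GPS data: cannot validate here; refer the case onward
    return all(haversine_meters(lat, lon, home_lat, home_lon) <= threshold_m
               for lat, lon in pings)
```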

The field continues to monitor COVID-19 levels and use high-filtration masks as well as other mitigation procedures in areas of high transmission.

As in prior years, respondent contact materials provided respondents with the link to the MEPS website (www.meps.ahrq.gov); a toll-free number to Alex Scott, a study representative at Westat; and the link to the Westat website (www.westat.com). Calls received from the Alex Scott line were logged into the call-tracking system and the appropriate supervisor notified so that he/she could take the proper course of action.

The advance contact calls to Panel 27 Round 1 households were made by a subset of the experienced MEPS interviewers.

Typically, for Round 1 households, interviewers are instructed, with a few exceptions, to make initial contact with the household in person. For later rounds, interviewers are allowed to make initial contacts to set appointments by telephone, so long as the household has been cooperative in prior rounds.

In 2022, MEPS interviews were conducted in three modes: in-person, CAVI, and telephone. Interviewers were given guidance throughout each field period about which modes were appropriate for their cases, and interview modes were closely monitored. CAVI interviews are conducted via Zoom meetings hosted by the interviewer. Both interviewer and respondent are visible and audible to one another, can share images of records, and can share show card images to allow respondents to select a response. CAVI interviewing started in late spring 2022 but became far more common in the fall, accounting for over 20 percent of completed interviews. Later-round cases were specifically targeted for CAVI interviews; however, CAVI was also permissible for Round 1 cases after initial contact. Interviewers typically offered CAVI when respondents were unwilling to have an interviewer in the respondent’s home.

In 2022, electronic AF collection was implemented. The two new electronic methods for completing MEPS AFs (eSignature and DocuSign) are further described in Chapter 2. The AF procedures varied based on the interview mode and the household contact information provided to MEPS. During in-person interviews, available household members signed on the interviewer’s laptop (eSignature). For household members not available during the in-person interview, or for CAVI or telephone interviews, respondents were sent a link via email or text to sign forms in DocuSign. Paper AFs were still used when requested, or for household members who were unavailable and not eligible for DocuSign because they had not provided an email address or cellphone number.

The interview follow-up procedures also varied by mode. For CAVI and telephone interviews, any paper AFs and self-administered questionnaires (SAQs) were mailed by the interviewer shortly after the interview was completed. Pick-up of the forms was arranged, or a business reply envelope (BRE) was enclosed for returning the forms directly to the home office. Whenever forms were requested but not collected during the interview, the interviewer made up to three follow-up calls to ensure DocuSign AFs were signed and/or paper forms were completed and returned.

MEPS field managers, field directors, and the task leader for field operations continued to manage the field data collection in collaboration with the field supervisors, reinforcing the importance of balancing data quality with production and cost goals across regions. Field staff referred to this collaborative effort as the “No Region Left Behind” approach.

Throughout the year Westat continued to review data for all respondents reported to have been institutionalized in order to identify any individuals who might have been inappropriately classified and, as a result, treated as out of scope for MEPS data collection.

Data Collection Schedule. The sequence for beginning the spring rounds of data collection, most recently adjusted in 2014, was maintained for the spring round of 2022. Data collection began with Rounds 5, 7, and 9, followed by Round 3, and then Round 1. For the Round 1 respondents, the later starting date allowed several additional weeks of elapsed time in which respondents could experience healthcare events to report in their Round 1 interview, with these additional events giving them a more realistic understanding of what to expect in the subsequent rounds of the study.

The field period dates for the eight rounds conducted in 2022 are shown in Table 4-1.

Return To Table Of Contents

Table 4-1. Data collection schedule and number of weeks per round of data collection, 2022

Round Dates No. of weeks in round
1 January 24-July 14 24
2 July 28-December 7 19
3 January 17-June 15 21
4 July 21-December 7 20
5 January 10-May 15 18
7 January 10-May 15 18
8 July 21-December 7 20
9 January 10-May 15 18

Return To Table Of Contents

Data Quality (DQ) Monitoring. The MEPS DQ field monitoring system and procedures allowed supervisors and field managers to identify interviewers whose work deviated from quality standards and who might need additional coaching on methods for getting respondents to more completely report their healthcare events. CARI review was further integrated into weekly monitoring activities with supervisors listening to portions of roughly 1,000 interviews per field period from across all interview modes. These reviews were used to reinforce positive interviewing behaviors and techniques; in addition, listening to CARI gave field supervisors direct exposure to interviewing behaviors that needed to be addressed. In some cases, CARI recording results were such that interviewers were instructed to stop working until they could receive some retraining, including administering a practice interview to their field supervisor.

Case Potential Listing. The project continued the use of a model predicting a completed interview for a given case (“propensity to complete”) relative to other pending cases in a region. The model is designed to identify cases with a high likelihood of completion at that point in the field period relative to other pending cases. The model is dynamic and is updated weekly based on the specific conditions for pending cases at that time. The model was tested in 2019 to determine whether updates were necessary to better fit the data; the existing model was found to remain well-suited to current interview conditions and stays in effect even for telephone interviews.
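
As an illustration only, a propensity-to-complete score can be produced by a simple logistic model over current case conditions and refreshed each week. The Python sketch below is a hypothetical stand-in, not the project’s model; the case attributes and coefficients shown are assumptions made for the example.

```python
from dataclasses import dataclass
from math import exp

@dataclass
class PendingCase:
    prior_contact_attempts: int
    ever_refused: bool
    weeks_remaining: int

def propensity_to_complete(case: PendingCase) -> float:
    """Toy logistic model: probability that a pending case completes this week,
    given its current status. Coefficients are illustrative only."""
    z = (0.8
         - 0.10 * case.prior_contact_attempts
         - 1.20 * case.ever_refused
         + 0.05 * case.weeks_remaining)
    return 1.0 / (1.0 + exp(-z))

def rank_cases_for_week(cases):
    """Refresh scores for the week and return cases ordered from highest to
    lowest completion propensity, mirroring the weekly update cycle."""
    return sorted(cases, key=propensity_to_complete, reverse=True)
```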

Information from this model is integrated into BFOS (the system used for case management), providing propensity to complete as part of a comprehensive view of a case for a given week. Supervisors were to instruct interviewers—in the absence of other field information that would dictate otherwise—to attempt these cases during the next production week. Table 4-2 illustrates the potential categories used to classify cases on a weekly basis to promote field efficiency.

Table 4-2. Case potential categories for classifying and prioritizing case work, spring 2022

Potential categories for pending MEPS cases
High potential (unworked)
High potential (worked)
Appointment
Low potential
Low potential refusal
Remainder
Locating

Return To Table Of Contents

4.2 Data Collection Results: Interviewing

Table 4-3 provides an overview of the data collection results for Panels 21 through 27, showing sample sizes, average interviewer hours per completed interview, and response rates. Table 4-4 shows the final response rates a second time, reformatted to facilitate by-round comparisons across panels and years. In addition to the main panel rounds, both tables display the extended panel round data for Panels 23 and 24.

For the data collection rounds conducted in 2022, response rates showed at least a slight increase over 2021 but remained lower than pre-2020 levels. While response rates have not returned to pre-pandemic levels despite a return to in-person interviews, they have begun to rebound. Hours per complete are now higher than before the pandemic for Round 1, exceeding 13 hours.

Table 4-3. MEPS-HC data collection results, Panels 21 through 27*

Panel/Round Original sample Split cases (movers) Student cases Out-of-scope cases Net sample Completes Average interviewer hours/complete Response rate (%) Response rate goal
Panel 21 Round 1 9,851 462 92 89 10,316 7,674 5.9 74.4 80
Round 2 7,661 207 32 17 7,883 7,327 8.5 92.9 95
Round 3 7,327 166 14 19 7,488 7,043 7.2 94.1 96
Round 4 7,025 119 14 20 7,138 6,907 7.0 96.8 97
Round 5 6,914 42 8 34 6,930 6,778 5.9 97.8 98
Panel 22 Round 1 9,835 352 68 86 10,169 7,381 12.8 72.6 80
Round 2 7,371 166 19 11 7,545 7,039 8.5 93.3 95
Round 3 7,071 100 12 19 7,164 6,808 6.7 95.0 96
Round 4 6,815 91 13 18 6,901 6,672 6.8 96.7 97
Round 5 6,670 35 7 12 6,700 6,584 5.3 98.3 98
Panel 23 Round 1 9,960 193 46 110 10,089 7,351 12.5 72.9 80
Round 2 7,387 106 14 15 7,492 6,960 8.2 92.9 95
Round 3 6,987 102 11 18 7,082 6,703 6.1 94.6 96
Round 4 6,704 74 10 12 6,776 6,522 6.6 96.2 97
Round 5 6,503 34 4 5 6,536 6,383 5.3 97.7 98
Round 6 6,498 90 10 18 6,480 5,120 4.8 79.0 90
Round 7 5,176 36 5 6 5,170 4,513 5.2 87.3 85
Round 8 4,558 27 3 10 4,548 3,984 5.8 87.6 80
Round 9 4,006 10 4 10 3,996 3,603 4.7 90.2 90
Panel 24 Round 1 9,976 153 43 82 10,090 7,186 11.8 71.2 80
Round 2 7,211 98 19 5 7,323 6,777 7.9 92.5 95
Round 3 6,812 76 9 7 6,890 6,289 6.0 91.3 96
Round 4 6,335 44 4 13 6,370 5,446 5.1 85.5 97
Round 5 5,510 31 4 15 5,495 4,770 5.3 86.8 85
Round 6 4,816 22 8 8 4,808 3,959 5.7 82.3 80
Round 7 4,007 28 0 5 4,002 3,500 5.3 87.5 87
Round 8 3,528 14 0 9 3,519 3,121 5.9 88.7 85
Panel 25 Round 1 10,008 184 38 78 10,152 6,265 9.6 61.7 80
Round 2 5,907 49 14 12 5,958 4,677 5.5 78.5 95
Round 3 5,191 38 5 2 5,189 4,230 6.1 81.5 80
Round 4 4,314 40 10 7 4,307 3,685 7.3 85.6 97
Round 5 3,712 11 5 6 3,706 3,278 5.3 88.4 85
Panel 26 Round 1 9,674 160 29 68 9,795 5,882 11.1 60.1 70
Round 2 6,047 83 11 2 6,045 4,799 9.0 79.4 95
Round 3 4,882 42 4 6 4,876 4,103 6.8 84.1 83
Round 4 4,165 30 10 4 4,161 3,805 7.6 91.4 97
Panel 27 Round 1 10,085 193 28 78 10,007 6,158 13.2 61.5 65
Round 2 6,288 68 11 3 6,285 5,368 8.9 85.4 80

*Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
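
For readers tracing the table, the response rate column can be reproduced as completes divided by net sample. The sketch below is a minimal illustration using published Table 4-3 values; it does not model the nonresponse-subsampling weights noted in the footnote.

```python
def response_rate(completes: int, net_sample: int) -> float:
    """Response rate as shown in Table 4-3: completes divided by net sample, in percent."""
    return round(100 * completes / net_sample, 1)

# Worked examples from Table 4-3:
assert response_rate(7_674, 10_316) == 74.4   # Panel 21 Round 1
assert response_rate(6_158, 10_007) == 61.5   # Panel 27 Round 1
```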

Return To Table Of Contents

Table 4-4. Response rates by data collection year, 2013-2022

Year and panel: response rate (%) by round
2013
Panel 18: Round 1, 74.2; Round 2, 92.9
Panel 17: Round 3, 95.2; Round 4, 95.5
Panel 16: Round 5, 97.6
2014
Panel 19: Round 1, 71.8; Round 2, 93.6
Panel 18: Round 3, 94.5; Round 4, 97.1
Panel 17: Round 5, 98.5
2015
Panel 20: Round 1, 73.5; Round 2, 93.4
Panel 19: Round 3, 94.7; Round 4, 96.7
Panel 18: Round 5, 98.4
2016
Panel 21: Round 1, 74.4; Round 2, 93.0
Panel 20: Round 3, 95.1; Round 4, 96.8
Panel 19: Round 5, 98.3
2017
Panel 22: Round 1, 72.6; Round 2, 93.3
Panel 21: Round 3, 94.1; Round 4, 96.8
Panel 20: Round 5, 96.4
2018
Panel 23: Round 1, 72.9; Round 2, 92.9
Panel 22: Round 3, 95.0; Round 4, 96.7
Panel 21: Round 5, 97.8
2019
Panel 24: Round 1, 71.2; Round 2, 92.5
Panel 23: Round 3, 94.6; Round 4, 96.2
Panel 22: Round 5, 98.3
2020
Panel 25: Round 1, 61.7; Round 2, 78.5
Panel 24: Round 3, 91.3; Round 4, 85.5
Panel 23: Round 5, 97.7; Round 6, 79.0
2021
Panel 26: Round 1, 60.1; Round 2, 79.4
Panel 25: Round 3, 81.5; Round 4, 85.6
Panel 24: Round 5, 86.8; Round 6, 82.3
Panel 23: Round 7, 87.3; Round 8, 87.6
2022
Panel 27: Round 1, 61.5; Round 2, 85.4
Panel 26: Round 3, 84.1; Round 4, 91.4
Panel 25: Round 5, 88.6
Panel 24: Round 7, 87.5; Round 8, 88.7
Panel 23: Round 9, 90.2

Return To Table Of Contents

Table 4-5 illustrates the mode of data collection for each of the 2022 data collection rounds. CAVI interviews were offered as the first alternative to in-person, and for Round 8 as the primary mode. In all cases, telephone was the least-preferred mode due to concerns regarding data quality and respondent engagement in the study.

Table 4-5. Completed cases by mode of interviewing for Panels 23 through 27

Completes In-Person Telephone CAVI
Panel 23 Round 9 327 3,212 63
Panel 24 Round 7 362 3,047 91
Round 8 499 1,342 1,280
Panel 25 Round 5 1,736 1,467 75
Panel 26 Round 3 2,638 1,271 194
Round 4 2,812 426 567
Panel 27 Round 1 4,756 1,117 285
Round 2 4,175 482 711

Return To Table Of Contents

Components of Response and Nonresponse

Table 4-6 summarizes components of nonresponse associated with the Round 1 households by panel beginning in 2017. Prior to 2020 the components of nonresponse remained relatively stable. Starting in 2020, the “refusal” and “other nonresponse” categories have shown a significant increase. Increases and decreases in the percentage of refusals align closely with corresponding decreases and increases in the completion rate.

Table 4-6. Summary of MEPS Round 1 response and nonresponse, 2017-2022 Panels

Response and nonresponse components 2017 (P22R1) 2018 (P23R1) 2019 (P24R1) 2020 (P25R1) 2021 (P26R1) 2022 (P27R1)
Total sample 10,255 10,199 10,172 10,230 9,863 10,085
Out of scope (%) 0.8 1.1 0.8 0.8 0.7 0.8
Complete (%) 72.6 72.9 70.6 61.2 59.6 61.1
Nonresponse (%) 27.4 27.1 28.6 38.0 39.7 38.2
Refusal (%) 21.8 22.4 24.0 28.7 31.2 30.4
Not located (%) 3.9 3.1 3.1 3.2 4.3 3.3
Other nonresponse (%) 1.7 1.7 1.5 6.1 4.2 4.5

Return To Table Of Contents

Tables 4-7 through 4-14 summarize results for additional aspects of the 2022 data collection. Because Round 1 is the most difficult of all the rounds, the presentation focuses primarily on Panel 27, Round 1.

Table 4-7. Summary of MEPS Round 1 response, 2017-2022 panels, by NHIS completion status

NHIS completion status 2017 (P22R1) 2018 (P23R1) 2019 (P24R1) 2020 (P25R1) 2021 (P26R1) 2022 (P27R1)
Original NHIS sample (N) 9,835 9,839 9,864 9,866 9,509 9,700
Percent complete in NHIS 81.0 80.4 84.2 89.3 85.3 83.3
Percent partial complete in NHIS 19.0 19.6 15.8 10.7 14.7 16.7
Percent complete for NHIS completes 75.4 75.4 73.5 63.5 63.1 64.2
Percent complete for NHIS partial completes 62.0 63.6 60.3 46.8 44.1 49.5

Note: Figures shown are based on original NHIS sample and exclude RUs added to the sample as “splits” and “students.”

Return To Table Of Contents

NHIS Completion Status

Each year the MEPS sample includes a number of households classified in the NHIS as “partial completes,” in which the interviewer was able to complete part, but not all, of the full NHIS interview. Given the NHIS redesign implemented in 2018, the partial completes included in the 2022 MEPS sample included some cases that completed only the roster module of the NHIS. The MEPS experience has been that for many of these NHIS cases, the difficulty experienced by the NHIS interviewer carries over to the MEPS interview: the MEPS response rate for the NHIS partial completes is substantially lower than for the NHIS completes. As noted in Chapter 1, for the 2022 sample, AHRQ repeated the step taken since 2012 of sampling the NHIS partial completes in the “White/other” category at a lower rate than the NHIS completes.

The upper portion of Table 4-7 shows the proportion of partial completes in the sample over recent years. Across all domains, there was a significant drop in the proportion of the sample classified as partial complete in 2020 compared with all previous years shown in the table. Since then, the proportion of partial completes has increased. The lower portion of the table shows the persistent and substantial difference in response rate between these two components of the sample. Prior to 2020, among the cases originally delivered from the NHIS (that is, with new reporting units discovered during the MEPS interviewing excluded from the counts), the response rate for the NHIS partial completes averaged around 13 percentage points lower than that for the NHIS completes. In 2020, that difference increased to 16.7 percentage points, and it widened to 19 percentage points in 2021. In 2022, the difference is more in line with years prior to 2020, at 14.7 percentage points.

Sample Domain

Table 4-8 breaks out response information for the NHIS completes and partial completes by sample domain categories for Panel 27. Table 4-8, unlike Table 4-7, does include reporting units added to the sample during Round 1 data collection; it shows the differential in response rates between the NHIS partial completes and full completes persisting across all of the domains. The difference across the full 2022 sample was 14.1 percentage points, with NHIS partial completes responding at a lower rate in all domains. Within the individual domains, the difference between the response rate for the NHIS completes and the NHIS partials was greatest for the White/other domain, at 18.1 percentage points.

Table 4-8. Summary of MEPS Panel 27 Round 1 response rates, by sample domain by NHIS completion status

Domain/NHIS status Net sample (N) Complete (%) Refusal (%) Not located (%) Other nonresponse (%)
Asian 794 54.7 34.6 4.4 6.3
NHIS complete 638 58.1 31.7 3.9 6.3
NHIS partial complete 156 40.4 46.8 6.4 6.4
Black 1,357 70.4 21.3 3.4 4.9
NHIS complete 1,071 72.7 19.6 3.2 4.5
NHIS partial complete 286 61.9 27.6 4.2 6.3
Hispanic 1,944 65.1 27.9 4.1 2.9
NHIS complete 1,520 67.2 25.8 3.9 3.1
NHIS partial complete 424 57.3 35.6 4.9 2.1
White/other 5,912 59.3 33.1 2.9 4.8
NHIS complete 5,081 61.8 31.2 2.7 4.5
NHIS partial complete 831 43.7 44.8 5.0 6.5
All groups 10,007 61.5 30.6 3.3 4.5
NHIS complete 8,310 63.9 28.7 3.0 4.4
NHIS partial complete 1,697 49.8 39.8 5.0 5.4

Note: Includes reporting units added to sample as “splits” and “students” from original NHIS households, which were given the same “complete” or “partial complete” designation as the original household.

Return To Table Of Contents

Table 4-9 (shown on the next page) further breaks out response information for Panel 27 by interview mode.

Table 4-9. Summary of MEPS Panel 27 Round 1 response rates, per interview mode, by sample domain by NHIS completion status

Domain/NHIS status In-person Telephone CAVI
Asian 270 126 38
NHIS complete 225 110 36
NHIS partial complete 45 16 2
Black 774 144 37
NHIS complete 628 121 30
NHIS partial complete 146 23 7
Hispanic 995 233 38
NHIS complete 805 187 30
NHIS partial complete 190 46 8
White/other 2,717 614 172
NHIS complete 2,445 538 157
NHIS partial complete 272 76 15
All groups 4,756 1,117 285
NHIS complete 4,103 956 253
NHIS partial complete 653 161 32

Return To Table Of Contents

Refusals and Refusal Conversion

Table 4-10 summarizes the results of refusal conversion efforts by panel. The rate of “ever refused” for RUs in Panel 27 was down to 37.7 percent from its highest level in Panel 26.

Table 4-10. Summary of MEPS Round 1 results for RUs who ever refused, Panels 21 through 27

Panel Net sample (N) Ever refused (%) Converted (%) Final refusal rate (%) Final response rate (%)
Panel 21 10,316 29.1 29.0 20.2 74.4
Panel 22 10,169 30.1 27.6 21.8 72.6
Panel 23 10,089 31.3 25.6 22.4 72.9
Panel 24 10,090 32.6 23.4 24.2 71.2
Panel 25 10,152 34.8 12.3 28.9 61.7
Panel 26 9,795 40.4 19.3 31.4 60.0
Panel 27 10,007 37.7 14.8 30.6 61.5

Return To Table Of Contents

Tracing and Locating

Table 4-11 shows results of locating efforts for households that required tracing during the Round 1 field period, by panel. The percentage of households that required some tracing in 2022 (11.0 percent) dropped 0.3 percentage points from 2021 and was the lowest rate in many years; the final rate of households that were not located after tracing efforts also dropped to 3.3 percent from its highest point in 2021.

Table 4-11. Summary of MEPS Round 1 results for RUs who were ever traced, Panels 21 through 27

Panel Total sample (N) Ever traced (%) Not located (%)
Panel 21 10,405 12.8 3.7
Panel 22 10,228 13.0 3.9
Panel 23 10,199 12.7 3.0
Panel 24 10,172 12.6 3.0
Panel 25 10,230 11.7 3.2
Panel 26 9,863 11.3 4.3
Panel 27 10,085 11.0 3.3

Return To Table Of Contents

Interview Length

Table 4-12 shows the mean length (in minutes) for interviews conducted without interruption in a single session in Panels 21 through 27. Starting in 2020, with the pandemic shutdown, nearly all interviewing moved to the telephone; in 2021, a large number of interviews were still conducted by telephone, which take longer because interviewers must read the show cards aloud. In 2022, interview times decreased. The reduction is largely attributable to the introduction of electronic signature and DocuSign for AFs: in most cases, interviewers no longer have the burden of preparing paper AFs for household member signature.

Table 4-12. Interview timing comparison, Panels 21 through 27 (mean minutes per interview, single-session interviews)

Round Panel 21 Panel 22 Panel 23 Panel 24 Panel 25 Panel 26 Panel 27
Round 1 75.5 79.9 78.1 79.5 89.0 92.9 82.3
Round 2 85.3 88.8 88.2 87.0 89.7 93.3 79.3
Round 3 93.4 93.0 92.6 98.5 100.0 76.5
Round 4 82.7 84.3 86.8 86.2 93.2
Round 5 76.0 78.8 78.7 97.1 75.5
Round 6 88.4 89.7
Round 7 96.6 85.4
Round 8 90.1 78.5
Round 9 76.5

Return To Table Of Contents

Table 4-13 shows the mean length (in minutes) by mode for interviews conducted without interruption in a single session. While CAVI interviews tend to be slightly longer, some of this time is accounted for by the equipment setup and procedures necessary to conduct a Zoom interview.

Table 4-13. Interview timing comparison by interview mode for Panels 23 through 27 (mean minutes per interview, single-session interviews)

Panel/Round In-person Telephone CAVI
Panel 23 Round 9 73.1 76.9 80.6
Panel 24 Round 7 87.2 85.2 87.4
Round 8 76.0 76.3 82.0
Panel 25 Round 5 76.8 73.7 83.7
Panel 26 Round 3 91.7 85.8 94.4
Round 4 78.0 69.5 74.1
Panel 27 Round 1 82.2 83.2 90.1
Round 2 79.3 73.4 82.6

Return To Table Of Contents

Mean Contact Attempts Per Case

Table 4-14 shows mean contact attempts, by mode and NHIS completion status, for all cases in Round 1 of Panels 25 through 27. The number of contacts required per case in Panel 27 dropped significantly compared to 2020 and 2021.

Table 4-14. Mean contact attempts by NHIS completion status and interview mode, Round 1 of Panels 25 through 27

Contact type Panel 25, Round 1 Panel 26, Round 1 Panel 27, Round 1
All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial
N 9,866 8,814 1,052 9,509 8,113 1,396 9,700 8,077 1,623
% of all RUs 100.0 89.3 10.7 100.0 85.3 14.7 100.0 83.3 16.7
In-person 2.6 2.5 2.6 2.4 2.3 3.1 5.6 6.1 5.7
Telephone 9.7 9.5 11.6 8.8 8.7 9.8 8.7 8.7 9.4
CAVI (Panel 27 only) 10.6 10.6 11.3
Total 14.4 14.1 17.0 13.1 12.8 14.9 8.4 8.2 9.3

Return To Table Of Contents

4.3 Data Collection Results: Authorization Form Signing Rates

During the Respondent Forms section of the MEPS CAPI interview, interviewers are prompted to ask respondents to sign the AFs needed to conduct the Medical Provider Component (MPC) of MEPS. AFs are requested for each unique person-provider pairing identified during the interviews as a source of care to a key member of the household. Medical provider AFs are requested for physicians seen in an office-based setting; for inpatient, outpatient, or emergency room care received in a hospital; for care received from a home health agency; for telehealth; and for certain stays in long-term-care institutions. Pharmacy AFs are requested for each pharmacy from which a household member obtained prescription medicines.

Prior to 2022 all AFs were paper documents signed by pen. Starting in 2022, two electronic signature options were introduced. Respondents who are available at the time of the in-person interview may sign their forms electronically on the laptop. If a respondent is not available or not willing to sign at the time of the in-person interview, or if the interview is being conducted by CAVI or telephone, the respondent may be sent a link via text or email to sign their forms electronically in DocuSign. AFs may still be signed on paper if a respondent is not available to sign on the laptop and does not have a cellphone or email for DocuSign, if the respondent requests paper, or if the signer is outside the RU.
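
The choice among the three signing methods described above can be summarized as a simple routing rule. The following Python sketch is illustrative only; the function and parameter names are assumptions, and the actual field procedures involve additional judgment by the interviewer.

```python
def af_signature_method(interview_mode: str,
                        present_at_interview: bool,
                        willing_to_sign_now: bool,
                        has_email_or_cell: bool,
                        requests_paper: bool) -> str:
    """Sketch of the AF routing described above: eSignature on the laptop for
    members present at an in-person interview, DocuSign links for everyone
    else who can receive them, and paper otherwise."""
    if requests_paper:
        return "paper"
    if interview_mode == "in-person" and present_at_interview and willing_to_sign_now:
        return "eSignature (laptop)"
    if has_email_or_cell:
        return "DocuSign link (email or text)"
    return "paper"
```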

Table 4-15 shows round-by-round signing rates for the medical provider AFs for Panels 20 through 27. Starting with the rounds fielded in 2022, the rates are shown for each signature method and combined across all methods. Across all rounds in 2022, the eSignature rate is above 90 percent. As a result, the overall signing rate is more in line with 2019 rates, before the pandemic.

Table 4-15. Signing rates for medical provider authorization forms for panels 20 through 27

Panel/Round Signature method Authorization forms requested Authorization forms signed Signing rate (%)
Panel 20 Round 1 2,354 1,603 68.1
Round 2 25,334 18,479 72.9
Round 3 22,851 15,862 69.4
Round 4 18,234 14,026 76.9
Round 5 16,274 12,100 74.4
Panel 21 Round 1 2,037 1,396 68.5
Round 2 22,984 17,295 75.2
Round 3 20,802 14,898 71.6
Round 4 16,487 13,110 79.5
Round 5 20,443 16,247 79.5
Panel 22 Round 1 2,274 1,573 69.2
Round 2 22,913 17,530 76.5
Round 3 26,436 19,496 73.7
Round 4 23,249 18,097 77.8
Round 5 17,171 12,168 70.9
Panel 23 Round 1 1,982 1,533 77.3
Round 2 29,576 21,850 73.9
Round 3 23,365 14,575 62.4
Round 4 19,220 13,483 70.2
Round 5 17,569 10,903 62.1
Round 6 12,701 8,002 63.0
Round 7 13,254 8,108 61.2
Round 8 11,589 7,624 65.8
Round 9 eSignature 597 542 90.8
DocuSign 5,867 4,528 77.2
Paper 2,601 1,172 45.1
Combined 9,065 6,242 68.9
Panel 24 Round 1 2,285 1,306 57.2
Round 2 24,755 15,865 64.1
Round 3 22,657 11,522 50.9
Round 4 14,612 7,716 52.8
Round 5 15,992 8,941 55.9
Round 6 11,366 6,658 58.6
Round 7 eSignature 860 799 92.9
DocuSign 6,856 4,997 72.9
Paper 3,032 1,254 41.4
Combined 10,748 7,050 65.6
Round 8 eSignature 1,121 1,055 94.1
DocuSign 4,997 3,500 70.0
Paper 1,625 661 40.7
Combined 7,743 5,216 67.4
Panel 25 Round 1 3,110 1,242 39.9
Round 2 15,259 7,292 47.8
Round 3 15,932 8,100 50.8
Round 4 11,252 7,204 64.0
Round 5 eSignature 3,796 3,570 94.0
DocuSign 3,336 2,339 70.1
Paper 1,877 431 23.0
Combined 9,009 6,340 70.4
Panel 26 Round 1 2,432 1,151 47.3
Round 2 17,765 10,564 59.5
Round 3 eSignature 7,510 7,043 93.8
DocuSign 4,668 2,980 63.8
Paper 2,964 419 14.1
Combined 15,142 10,442 69.0
Round 4 eSignature 6,494 6,195 95.4
DocuSign 2,544 1,420 55.8
Paper 1,351 184 13.6
Combined 10,389 7,799 75.1
Panel 27 Round 1 eSignature 1,222 1,147 93.9
DocuSign 523 285 54.5
Paper 477 39 8.2
Combined 2,222 1,471 66.2
Round 2 eSignature 10,831 10,286 95.0
DocuSign 4,744 2,026 42.7
Paper 2,855 192 6.7
Combined 18,430 12,504 67.8

Return To Table Of Contents

Calculation of the round-by-round collection rate for the medical provider AFs is based on all forms requested during a round. The rates calculated for Rounds 2 through 9 include forms fielded but not signed in an earlier round (nonresponse), as well as forms that were fielded and signed in an earlier round but rendered obsolete because the person had another health event with the provider after the date on which the original form was signed.
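
Put another way, the rate for a round is the number of forms signed divided by all forms in play for that round, including forms carried forward from earlier rounds. The sketch below is a minimal illustration with hypothetical field names; the published tables report only combined totals, so the carried-forward components are shown here only as placeholders.

```python
def round_signing_rate(newly_requested: int,
                       carried_forward_unsigned: int,
                       refielded_obsolete: int,
                       signed_this_round: int) -> float:
    """Signing rate (%) for a round: forms signed divided by all forms in play
    for the round, including forms carried forward from earlier rounds."""
    requested = newly_requested + carried_forward_unsigned + refielded_obsolete
    return round(100 * signed_this_round / requested, 1)

# Example using the Panel 27 Round 2 combined totals from Table 4-15
# (the split between new and carried-forward forms is not published, so the
# whole denominator is passed as newly_requested for illustration only):
assert round_signing_rate(18_430, 0, 0, 12_504) == 67.8
```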

Table 4-16 shows signing rates for pharmacy AFs for Panels 20 through 27. Pharmacy AFs are requested in Rounds 2 through 9, with follow-up for nonresponse in subsequent rounds similar to that for medical provider AFs. As with the medical provider authorization forms, the overall signing rate in 2022 is in line with the 2019 pre-pandemic rates.

Table 4-16. Signing rates for pharmacy authorization forms for panels 20 through 27

Panel/Round Signature method Authorization forms requested Authorization forms signed Signing rate (%)
Panel 20 Round 2 12,074 8,796 72.9
Round 3 10,577 7,432 70.3
Round 4 9,099 6,945 76.3
Round 5 8,312 6,339 76.3
Panel 21 Round 2 10,783 7,985 74.1
Round 3 9,540 6,847 71.8
Round 4 8,172 6,387 78.2
Round 5 6,684 5,336 79.8
Panel 22 Round 2 10,510 7,919 75.4
Round 3 8,053 5,953 73.9
Round 4 7,284 5,670 77.8
Round 5 8,048 5,726 71.1
Panel 23 Round 2 8,834 6,514 73.8
Round 3 9,614 6,205 64.5
Round 4 8,486 5,900 69.5
Round 5 8,067 5,101 63.2
Round 6 5,668 3,418 60.3
Round 7 5,417 3,345 61.8
Round 8 5,182 3,341 64.5
Round 9 eSignature 303 269 88.8
DocuSign 2,587 1,983 76.7
Paper 1,240 563 45.4
Combined 4,130 2,815 68.2
Panel 24 Round 2 10,265 6,676 65.0
Round 3 9,096 4,831 53.1
Round 4 7,100 3,636 51.2
Round 5 6,528 3,682 56.4
Round 6 4,783 2,663 55.7
Round 7 eSignature 336 310 92.3
DocuSign 2,763 2,073 75.0
Paper 1,279 547 42.8
Combined 4,378 2,930 66.9
Round 8 eSignature 480 449 93.5
DocuSign 2,238 1,527 68.2
Paper 798 299 37.5
Combined 3,516 2,275 64.7
Panel 25 Round 2 6,783 3,180 46.9
Round 3 6,114 3,146 51.5
Round 4 4,640 2,888 62.2
Round 5 eSignature 1,667 1,572 94.3
DocuSign 1,416 983 69.4
Paper 787 181 23.0
Combined 3,870 2,736 70.7
Panel 26 Round 2 6,961 4,105 59.0
Round 3 eSignature 2,916 2,725 93.4
DocuSign 1,749 1,121 64.1
Paper 1,156 181 15.7
Combined 5,821 4,027 69.2
Round 4 eSignature 2,848 2,710 95.2
DocuSign 1,212 652 53.8
Paper 659 60 9.1
Combined 4,719 3,422 72.5
Panel 27 Round 2 eSignature 4,412 4,178 94.7
DocuSign 1,972 842 42.7
Paper 1,272 73 5.7
Combined 7,656 5,093 66.5

Return To Table Of Contents

4.4 Data Collection Results: Self-Administered Questionnaire (SAQ), Diabetes Care Supplement (DCS), and Collection Rates

Self-administered questionnaires (SAQs) are requested from key adult household members in Rounds 2 and 4. Forms that are not collected in Rounds 2 and 4 are requested again in Rounds 3 and 5. In fall 2022, SAQs were requested from Panel 24 Round 8 respondents as well. Table 4-17 shows the SAQ response rates, including both the round-specific rates and the combined rates after the follow-up round was completed.

Response rates have been declining over time, however. Notably, 2020 saw a significant decrease in response rates as a result of telephone interviewing due to COVID-19. The completion rate for initial requests in 2022 remained low. Overall procedures for the distribution and collection of hard-copy materials have not changed, with the exception of additional concentrated follow-up. In an effort to stem the decline and introduce additional electronic aspects to the MEPS collection, multimode (web and paper) SAQs will be implemented in 2023.

In Rounds 3 and 5, key adult household members who have been diagnosed with diabetes were asked to complete a short SAQ, the Diabetes Care Supplement (DCS). Forms not completed for pickup at the time of the interviewer’s visit were followed up by telephone in the latter stages of Rounds 3 and 5, but unlike the SAQ, there was no follow-up in the subsequent round for forms not collected in the round when first requested. Response rates for the DCS for Panels 19 through 26 are shown in Table 4-18. Completion rates for the DCS showed a modest but relatively steady decline over time. In 2022 there was a noticeable drop in requests, though the response rate remained about the same.

Table 4-17. Results of Self-Administered Questionnaire (SAQ) collection for Panels 21 through 27

Panel/Round SAQs requested SAQs completed SAQs refused Other nonresponse Response rate (%)
Panel 21 Round 2 13,143 10,212 1,170 1,761 77.7
Round 3 2,585 1,123 893 569 43.4
Combined, 2016 13,143 11,335 - - 86.2
Round 4 12,021 9,966 1,149 906 82.9
Round 5 2,078 834 884 360 40.1
Combined, 2017 12,021 10,800 - - 89.8
Panel 22 Round 2 12,304 9,929 1,086 1,289 80.7
Round 3 2,287 840 749 698 36.7
Combined, 2017 12,304 10,769 - - 87.5
Round 4 11,333 8,341 1,159 1,833 73.6
Round 5 2,090 811 896 383 38.8
Combined, 2018 11,333 9,152 - - 80.8
Panel 23 Round 2 12,349 8,711 1,364 1,289 70.5
Round 3 2,364 819 907 638 34.6
Combined, 2018 12,349 9,530 - - 77.2
Round 4 11,290 8,554 1,515 1,221 75.8
Round 5 2,711 983 923 805 36.3
Combined, 2019 11,290 9,537 - - 84.5
Round 6 8,537 4,732 682 3,123 55.4
Round 7 3,229 1,123 707 1,399 34.8
Combined, 2020 8,537 5,855 - - 68.6
Round 8 6,446 3,377 799 2,270 52.4
Round 9 2,654 724 633 1,297 27.3
Combined, 2021 6,446 4,101 - - 63.6
Panel 24 Round 2 12,027 8,726 1,641 1,660 72.6
Round 3 2,810 860 832 1,118 30.6
Combined, 2019 12,027 9,586 - - 79.7
Round 4 9,257 4,247 786 4,224 45.9
Round 5 4,224 1,476 838 1,910 34.9
Combined, 2020 9,257 5,723 - - 61.8
Round 6 6,440 3,196 819 2,425 49.6
Round 7 2,695 696 628 1,371 25.8
Combined, 2021 6,440 3,892 - - 60.4
Round 8 4,906 2,347 634 1,925 47.8
Panel 25 Round 2 8,109 3,555 529 4,025 43.8
Round 3 4,016 1,322 717 1,977 32.9
Combined, 2020 8,109 4,877 - - 60.1
Round 4 6,089 3,309 850 1,930 54.3
Round 5 2,325 655 583 1,087 28.2
Combined, 2021 6,089 3,964 - - 65.1
Panel 26 Round 2 8,419 4,609 1,009 2,801 54.7
Round 3 2,950 853 732 1,365 28.9
Combined, 2021 8,419 5,462 - - 64.9
Round 4 6,370 3,399 898 2,073 53.4
Panel 27 Round 2 9,690 4,669 1,529 3,492 48.2

Return To Table Of Contents

Table 4-18. Results of Diabetes Care Supplement (DCS) collection for Panels 19 through 26

Panel/Round DCSs requested DCSs completed Response rate (%)
Panel 19 Round 3 1,272 1,124 88.4
Round 5 1,316 1,144 87.2
Panel 20 Round 3 1,412 1,190 84.5
Round 5 1,386 1,174 84.9
Panel 21 Round 3 1,422 1,170 82.5
Round 5 1,481 1,212 81.8
Panel 22 Round 3 1,453 1,177 81.0
Round 5 1,348 1,018 75.5
Panel 23 Round 3 1,464 1,101 75.2
Round 5 1,350 933 69.1
Round 7 1,018 648 63.7
Round 9 813 446 54.9
Panel 24 Round 3 1,350 843 62.4
Round 5 1,082 599 55.4
Round 7 817 443 54.2
Panel 25 Round 3 963 514 53.4
Round 5 758 419 55.3
Panel 26 Round 3 894 516 57.7

Return To Table Of Contents

4.5 Quality Control

Interviewer performance was monitored through validation case review using GPS, CARI, and telephone interviews. The purpose of validation was to verify that the correct individual was contacted for the interview and that the interview was conducted according to MEPS-approved procedures.

Generally, all completed cases were validated by first examining the GPS data stored and encrypted on the laptop. If the case could not be validated because GPS data were missing, or the GPS information could not be verified to show the interviewer at the respondent address or another documented location at the time of the interview, the case was reviewed in the CARI system. If a case could not be validated in CARI due to poor quality or missing CARI data, the case was referred for telephone validation. All interviews completed in less than 30 minutes were also referred for telephone validation. Finally, for cases assigned to telephone validation, if the household could not be reached, a validation questionnaire was mailed with a return envelope.
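
The validation cascade described above can be summarized as a triage rule that works through GPS, CARI, telephone, and mail in turn. The Python sketch below is a simplified illustration, not the production logic; the function and parameter names are assumptions, and it treats usable CARI audio as sufficient for validation, whereas in practice a reviewer makes that judgment.

```python
def assign_validation_method(gps_confirms_location: bool,
                             cari_audio_usable: bool,
                             interview_minutes: float,
                             reached_by_phone: bool) -> str:
    """Simplified triage mirroring the cascade described above: GPS first,
    then CARI review, then telephone validation, then a mailed questionnaire.
    Interviews completed in under 30 minutes go to telephone validation."""
    if interview_minutes < 30:
        return "telephone validation" if reached_by_phone else "mailed validation questionnaire"
    if gps_confirms_location:
        return "validated by GPS"
    if cari_audio_usable:
        return "validated by CARI review"
    return "telephone validation" if reached_by_phone else "mailed validation questionnaire"
```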

In both the spring and fall rounds of 2022, about 97 percent of completed cases were validated. In the spring rounds, the rate of cases validated by CARI was higher at 65.9 percent compared to 51.4 percent in the fall rounds. The rate of cases validated with GPS data, however, was higher in the fall rounds at 37.8 percent compared to 22.7 percent in the spring. This is likely attributed to the increase in in-person interviews in the second half of 2022, which made GPS data available for more cases. Only 7.8 percent of completed cases were validated by phone in both the spring and fall rounds, and a very small share were validated by mail—less than 0.4 percent in both the spring and fall. While 97 percent of all completed cases were validated in 2022, the percent of each interviewer’s completed cases that were validated averaged 82 percent in the spring rounds and 93 percent in the fall rounds. The increase in the fall rounds was again likely due to the increase in cases that were validated using GPS data.

In addition to validating cases, MEPS field supervisors and managers typically conduct observations as part of a comprehensive mentoring process. Generally, MEPS uses technical solutions in place of in-person observations; however, there are specific needs met by specialized observation. As much as possible, observations are conducted in the early weeks of data collection so that problems can be detected and corrected as quickly as possible and interviewers are given feedback on ways to improve specific interviewing skills. While CARI offers a high-quality portal for evaluating interviewers on question administration, observations are still a critical tool, particularly for newly hired staff. Compared with the observation process, CARI and other report mechanisms do not allow for assessment of the full range of interviewer skills, including respondent contact, trip planning, gaining cooperation, and interviewer-respondent interactions. In addition, the observer serves as an on-site resource in situations where remedial training is necessary. Observation forms are processed and reviewed at the home office to determine the need for individual and field-wide follow-up on specific skills.

Return To Table Of Contents

4.6 Security Incidents

To comply with the requirement of reporting incidents involving loss or theft of laptops or hard-copy materials with respondents’ personally identifiable information (PII), field staff continued to use an automated loss reporting system (a system known as ILRS) to report incidents. Incidents were investigated, updates were sent to AHRQ and MEPS staff who received the initial automated ILRS notification, and results were recorded in an annual MEPS PII log. A security incident report was submitted to the Westat IRB for each confirmed incident.

A total of eight incidents of lost or stolen laptops/iPhones or hard-copy PII were reported in 2022. Of those reported incidents, five involved MEPS laptops and/or iPhones that were reported stolen or lost. In one case, the airline that the interviewer had flown on for MEPS travel found and returned the iPhone to Westat in working order. In the other four cases, two iPhones and two laptops were not recovered even though police reports were filed. The password-protected laptops were shut down at the time of the loss. Since MEPS laptops are fully disc-encrypted, respondent identity was not at risk. The MEPS iPhones are also password-protected.

Two of the reported incidents involved suspected or confirmed loss of hard-copy materials containing respondent PII or a breach of confidentiality. In one instance the interviewer’s car was broken into, and the laptop (accounted for above), one hard-copy PSAQ, a notebook page with contact information for another household, and three debit cards (without value) were stolen and not recovered. In the other instance of hard-copy loss, a FedEx package was never delivered. FedEx initiated a search, but the package was never found. The respondent in each of these cases was contacted and then sent a replacement package.

A new category of potential PII disclosure emerged in 2022 related to the introduction of the DocuSign signing method for AFs. First, a programming error caused DocuSign AFs to be sent to MEPS participants who shared a name with members of other households, so that forms corresponding to one household could be delivered to another. When this situation was reported early in the spring field period, the DocuSign envelope production process was stopped, the program code was revised, and testing was performed before the system was restarted. This error affected five households. All were contacted about what happened, and all agreed to continue. The second DocuSign-related issue resulted from user error, namely the mis-keying of a household member’s phone number, which caused AFs to be sent to the wrong household. This affected two households. The respondent from one of the households called the MEPS Respondent Hotline to report the error. In all cases of error, access to forms was suspended upon discovery and forms were reissued to the appropriate household.

Return To Table Of Contents

5. Home Office Support of Field Activities

The home office supports the data collection effort in several important ways. This support can be described in two phases: one phase of activity supports the launch of each new round of data collection; another phase supports the field operation while data collection is in progress. These two phases of activity are described in this chapter.

Return To Table Of Contents

5.1 Preparation for Field Activities

Hard-copy materials were assembled prior to data collection for cases fielded in Rounds 3, 5, 7, and 9 during the spring 2022 data collection. These materials consisted of AFs and SAQs outstanding from the previous round. Clerical staff created an RU folder for each case being fielded and inserted any AFs and SAQs that were printed for the case. Since there are no hard-copy case materials generated for Round 1 cases, RU folders were not created prior to data collection for Round 1 cases. With the introduction of electronic AFs during the spring 2022 data collection, the decision was made to no longer pre-print outstanding AFs beginning in the fall 2022 rounds. Additionally, SAQs are mailed to households prior to fall data collection. Therefore, no hard-copy materials were generated, and RU folders were not created for cases fielded for the fall 2022 data collection.

Supervisors received a Supervisor Assignment Log listing all of the cases to be released in their region for each wave of cases to use to assign cases to their interviewers. They entered the ID of the interviewer assigned to each case and sent the log back to the home office. The logs with assignments were then used to make the electronic assignments in the BFOS field management system. In the spring rounds, home office staff also shipped the RU folders directly to the interviewers based on the assignments in the logs for the first wave of cases. For later waves, the RU folders were shipped to regional clerks to distribute to the field interviewers.

Prior to the start of data collection for each period, interviewers connected remotely to the home office to download the CAPI software update for the upcoming rounds and received a home study training package to prepare them for interviewing. Field interviewers also received a replenishment of supplies at the start of the rounds.

Advance mailings to all respondent households were prepared and mailed by the home office staff. Addresses were first standardized and sent through the National Change of Address (NCOA) database to obtain the most current addresses for mailing. Any mail returned as undeliverable was recorded and the appropriate supervisor was notified. Requests to remail the Round 1 advance package to households who reported not receiving it were prepared and mailed by home office staff.

Return To Table Of Contents

5.2 Support During Data Collection

Respondent Contacts. Respondent contacts are an important component of home office support for the MEPS data collection effort. Printed materials mailed to respondents contain an email address and a toll-free telephone number that respondents can use to contact the project with questions, to make or cancel interview appointments, or to indicate that they do not wish to participate in the study. Home office staff received and initiated the response to all respondent contacts. They forwarded information received from respondent calls to the field supervisors, who initiated the appropriate follow-up and informed the home office of the results of their follow-up within 24 hours of notification. Table 5-1 shows the number and percent of RUs that made calls to the respondent hotline in the spring and fall rounds of 2018-2022. There was a significantly higher percentage of calls to the hotline in both spring and fall 2020. In spring 2021, the percentage of calls to the hotline was more in line with years prior to 2020, but it went back up in spring 2022. The percentage of calls in fall 2022 remained consistent with fall 2021, which was down compared to fall 2020 but still higher than in previous years.

Table 5-1. Number and percent of respondents who called the respondent information line, 2018-2022

Year and rounds Original sample size Number of calls Calls as a percent of sample size
Round 1
2018 – Panel 23 Round 1 9,846 383 3.9
2019 – Panel 24 Round 1 9,864 343 3.5
2020 – Panel 25 Round 1 9,880 586 5.9
2021 – Panel 26 Round 1 9,509 335 3.5
2022 – Panel 27 Round 1 9,700 426 4.4
Rounds 3/5
2018 – Panel 21 Round 5/Panel 22 Round 3 13,922 467 3.4
2019 – Panel 22 Round 5/Panel 23 Round 3 13,594 486 3.6
2020 – Panel 23 Round 5/Panel 24 Round 3 13,241 592 4.5
2021 – Panel 23 Round 7/Panel 24 Round 5/Panel 25 Round 3 15,616 555 3.6
2022 – Panel 23 Round 9/Panel 24 Round 7/Panel 25 Round 5/Panel 26 Round 3 16,399 818 5.0
Rounds 2/4
2018 – Panel 22 Round 4/Panel 23 Round 2 14,123 524 3.7
2019 – Panel 23 Round 4/Panel 24 Round 2 13,844 531 3.8
2020 – Panel 23 Round 6/Panel 24 Round 4/Panel 25 Round 2 18,480 1,163 6.3
2021 – Panel 23 Round 8/Panel 24 Round 6/Panel 25 Round 4/Panel 26 Round 2 19,339 848 4.4
2022 – Panel 24 Round 8/Panel 26 Round 4/Panel 27 Round 2 13,735 584 4.3

Return To Table Of Contents

Table 5-2 shows the number and types of calls received on the respondent hotline during 2021 and 2022. As in prior years, a substantial portion of the Round 1 calls were for refusals. In spring 2022 there was a higher percentage of calls for appointments in all rounds compared to the previous year. However, in the fall rounds the percentage of calls for appointments decreased significantly from the previous year.

Table 5-2. Calls to the respondent information line, 2021 and 2022

Reason for call Spring 2021 (Panel 26 Round 1, Panel 25 Round 3, Panel 24 Round 5, Panel 23 Round 7) Fall 2021 (Panel 26 Round 2, Panel 25 Round 4, Panel 24 Round 6, Panel 23 Round 8)
Round 1 Rounds 3, 5, 7 Rounds 2, 4, 6, 8
N % N % N %
Address/telephone change 2 0.6 19 3.4 59 7.0
Appointment 27 8.1 76 13.7 233 27.5
Request callback 101 30.1 240 43.2 287 33.8
No message 34 10.1 21 3.8 41 4.8
Other 8 2.4 48 8.6 8 0.9
Proxy needed 0 0.0 7 1.3 13 1.5
Request SAQ help 3 0.9 17 3.1 15 1.8
SAQ refusal 0 0.0 1 0.2 0 0.0
Special needs 0 0.0 2 0.4 1 0.1
Refusal 87 26.0 87 15.7 176 20.8
Willing to participate 73 21.8 37 6.7 15 1.8
Total 335 555 848


Reason for call Spring 2022 (Panel 27 Round 1, Panel 26 Round 3, Panel 25 Round 5, Panel 24 Round 7, Panel 23 Round 9) Fall 2022 (Panel 27 Round 2, Panel 26 Round 4, Panel 24 Round 8)
Round 1 Rounds 3, 5, 7, 9 Rounds 2, 4, and 8
N % N % N %
Address/telephone change 4 0.9 42 5.1 25 4.3
Appointment 91 21.4 215 26.3 99 17.0
Request callback 130 30.5 236 28.9 260 44.5
No message 13 3.1 23 2.8 22 3.8
Other 21 4.9 236 28.9 84 14.4
Proxy needed 4 0.9 6 0.7 6 1.0
Request SAQ help 0 0.0 0 0.0 0 0.0
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 0 0.0 0 0.0 0 0.0
Refusal 119 27.9 58 7.1 82 14.0
Willing to participate 44 10.3 2 0.2 6 1.0
Total 426 818 584

Return To Table Of Contents

Monitoring Production. Home office staff monitored production, cost, and data quality, and provided reports and feedback to field managers and supervisors for review and follow-up. Reports were generated weekly and distributed to AHRQ, showing weekly and cumulative field production data, response rates, and costs.

Home Office Support. Refusal letters were generated and mailed by home office staff as requested by the field. Home office staff also responded to supply requests from the field, replenishing interviewer and supervisor stocks of materials as needed.

Receipt Control. As interviewers completed cases, they transmitted the data electronically and shipped any hard-copy documents to the home office receipt operation. Interviewers shipped all hard-copy material containing PII via FedEx, which facilitates tracking of late or lost shipments. When preparing a shipment to the home office receipt department, interviewers used the Ship to Receipt module in BFOS to indicate exactly what materials were included in the package and recorded the FedEx tracking number. This information was sent directly to the receipt control system so that staff knew what materials to expect. For interviews completed by phone or CAVI and for which pickup of hard-copy documents could not be arranged, interviewers provided a BRE for the respondent to send their documents directly to the home office. AFs signed electronically, either on the laptop or in DocuSign, were uploaded to a secure server to be accessed for receipt. Paper AFs were reviewed by receipt staff, then scanned and uploaded to the secure server. When a problem was found in an AF, the problem was documented and feedback was sent to the field supervisor to review with the interviewer. All self-administered questionnaires, including SAQs, PSAQs, and DCSs, were receipted and sent out for TeleForm scanning.

Helpdesk Support. The MEPS CAPI Helpdesk continued to provide technical support for field interviewing activities during 2022. Helpdesk staff were available 7 days a week to help field staff resolve CAPI, Field Management System, transmission, laptop, and iPhone problems. Incoming calls were documented for follow-up as needed to resolve individual issues and to identify issues reported by multiple interviewers. The CAPI Helpdesk coordinated tracking and shipping of all field laptops, field laptop assignment, and laptop and phone repairs.

Return To Table Of Contents

6. Data Processing and Data Delivery

This chapter briefly describes the activities that supported Westat’s data delivery work during the year and identifies the principal files related to data year 2020 delivered in 2022.

Return To Table Of Contents

6.1 Processing to Support Data Delivery

6.1.1 Schedules for Data Delivery

Adhering to the schedule for delivery of the key MEPS public use files is of paramount importance to the project. Throughout 2022, data processing activities to support the major file deliveries for the year proceeded simultaneously along several different delivery paths, with activity focused separately on each of the panels for the annual full-year files. As in past years, the project used a set of comprehensive data delivery schedules to guide management of the effort. The schedules integrate key dates for the data collection, data capture, coding, editing and imputation, weights construction, and documentation production tasks. These schedules provide a framework for assessing the potential impact of proposed changes at the start of each processing cycle and for coordinating the succession of processes that comprise the delivery effort.

Return To Table Of Contents

6.1.2 Data Quality Control System

The data quality control (DQC) system consists of both a consolidated database that preserves data as returned from the field, and a DQC-specific database that shows the current values of data following any required updates. DQC technicians access the data through a secure portal.

Technicians review and edit the data using the Blaise database model that is used in the field for data collection. All DQC work occurs at a “case” level. The DQC system automatically creates a unique “issue” for each instance of text entered as a comment and includes the comment category selected by the field interviewer associated with the text entry. As cases are loaded into DQC, each comment and category is checked by a Natural Language Processing (NLP) algorithm that identifies the most likely category. During processing, data technicians have the opportunity to accept or update this category. Technicians then follow standardized procedures for data review and editing based on the comment category.
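
The following minimal sketch illustrates how a category-suggestion step of this kind can work. The keyword lists, category subset, and scoring rule shown are illustrative assumptions for this sketch and do not describe the actual MEPS NLP algorithm.

# Illustrative sketch only: a keyword-based stand-in for the step that
# suggests the most likely comment category. Category names mirror Table 6-2;
# the keyword lists are hypothetical.
CATEGORY_KEYWORDS = {
    "Health Care Events": ["visit", "hospital", "appointment", "surgery"],
    "Prescribed Medicines": ["prescription", "medicine", "refill", "pharmacy"],
    "Health Insurance": ["insurance", "plan", "premium", "medicaid", "medicare"],
    "Employment": ["job", "employer", "work", "hours"],
    "Other": [],
}

def suggest_category(comment_text):
    """Return the category whose keywords best match the comment text."""
    tokens = set(comment_text.lower().split())
    scores = {category: sum(word in tokens for word in keywords)
              for category, keywords in CATEGORY_KEYWORDS.items()}
    best_category, best_score = max(scores.items(), key=lambda item: item[1])
    # Fall back to "Other" when no keyword matches at all.
    return best_category if best_score > 0 else "Other"

# The technician would see the suggested category next to the one the
# interviewer selected and accept or update it.
interviewer_category = "Other"
suggested = suggest_category("RU member had a follow-up hospital visit in March")
print(suggested, "(interviewer selected:", interviewer_category + ")")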

The DQC system also runs a series of programmatic checks and assigns a new "issue" for each instance that triggers a consistency or edit check. These checks are designed to ensure that data changed during editing conform fully to the rules of the CAPI instrument before the data are released. In addition, issues are, on rare occasion, added manually to individual cases by DQC staff from MEPS Help Desk reports, such as when a name or email address is discovered to be misspelled after completion of the interview; these issues are included among the number of cases with at least one interviewer comment. During spring 2022, 12.1 percent of cases received from the field included a comment (Table 6-1). Cases with any issue (a field comment or a consistency check) totaled 34.3 percent. For fall 2022, 12.7 percent of cases received from the field included a comment, while cases with any issue totaled 25.0 percent.
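
As a rough illustration of how such programmatic checks can be organized, the sketch below runs a small set of invented consistency rules over a single case record and creates one issue per triggered instance; the rules and record layout are hypothetical, not the actual MEPS edit checks.

# Illustrative sketch: run consistency checks over one case record and create
# an "issue" for each check instance that triggers. The rules are invented
# examples, not the actual MEPS edit checks.
def check_event_dates(case):
    for event in case.get("events", []):
        if event["date"] < case["reference_period_start"]:
            yield f"Event {event['id']} is dated before the reference period"

def check_member_ages(case):
    for member in case.get("members", []):
        if not 0 <= member["age"] <= 120:
            yield f"Member {member['id']} has an implausible age: {member['age']}"

CHECKS = [check_event_dates, check_member_ages]

def run_checks(case):
    """Return one issue record per triggered check instance."""
    issues = []
    for check in CHECKS:
        for message in check(case):
            issues.append({"case_id": case["case_id"],
                           "check": check.__name__,
                           "message": message})
    return issues

example_case = {
    "case_id": "A1001",
    "reference_period_start": "2022-01-01",
    "events": [{"id": "E1", "date": "2021-12-15"}],
    "members": [{"id": "M1", "age": 34}],
}
for issue in run_checks(example_case):
    print(issue)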

Table 6-1. 2022 cases with comments or data check issues

Field period Cases processed Cases with at least 1 comment % cases with comments Cases with at least 1 issue % cases with issues Not actionable (comments) % NA comments
Spring 2022 20,697 2,497 12.1 7,091 34.2 2,143 51.8
Fall 2022 12,302 1,565 12.7 3,073 25.0 1,461 57.4

Return To Table Of Contents

Field interviewers must select one of 10 categories for each comment text string; after selecting a category, CAPI provides category-specific guidance on information to include in the comment (e.g., RU member name, event date). They receive training to help identify the most meaningful category and avoid overuse of the category “Other.” Table 6-2 shows the number of comments made in each category as assigned by the NLP algorithm and confirmed by the data technicians.

Table 6-2. Total number of comments by category

Category # %
1. RU/RU Member 419 6.3
2. RU Member Refusal 92 1.4
3. Condition 166 2.5
4. Health Care Events 3,580 53.3
5. Glasses/Contact Lenses 51 0.8
6. Other Medical Expenses 78 1.2
7. Prescribed Medicines 712 10.7
8. Employment 476 7.1
9. Health Insurance 576 8.6
10. Other 555 8.3
Total 6,685

Return To Table Of Contents

6.1.3 Transformation

Transformation is the process of extracting data from the Blaise data models optimized for data collection and writing them to the data exchange format (Dex) required by the data delivery teams. The transformation has two logical activities: first, transforming the structure of the data from the data collection structure to the Dex structure; and second, transforming the format of the data from Blaise to Oracle. The resulting data, now stored in Oracle using the Dex structure, serve as input to the analytic editing, variable construction, public use files (PUFs), and other file deliveries. The goal is to disrupt the delivery activities as little as possible in order to provide data of the highest quality as efficiently as possible.

As shown in Figure 6-1, data transformation has four distinct layers. The metadata layer contains all the variable definitions—including names, tables, or segments or blocks—and transformation logic, sometimes known as plain-language transformation specifications. The analytic group leads at Westat are typically responsible for the metadata and the transformation logic.

Figure 6-1. Blaise to Dex transformation

Figure 6-1 shows the four components (layers) of the Blaise to Dex transformation process.

Based on the metadata, two specifications are developed. The first describes the Dex structure using a formal schema, which is expressed as a set of SQL statements to create the empty Oracle Dex database. The second specification is the detailed transformation specification. Each variable is assigned to a set of similar variables called a transformation class. A unique transformation class is defined by the information needed to specify the transformation. For instance, some variables simply need to be copied to an appropriate location in the Dex. These are known as passthrough variables and belong to the Passthrough class. Code All That Apply variables are transformed based on the value selected by the interviewer, so the specification requires an additional Dex variable for each possible value. Code All That Apply is another transformation class. All of the classes are developed through discussions with AHRQ and are sent to AHRQ for approval.
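
The sketch below illustrates the two transformation classes named above, using invented variable names and in-memory dictionaries in place of the Blaise source data and the Oracle Dex tables.

# Illustrative sketch of the Passthrough and Code All That Apply transformation
# classes. The variable names and dictionaries are hypothetical and stand in
# for the actual MEPS structures.
def transform_passthrough(source_record, variable_name):
    """Passthrough class: copy the value to the Dex variable unchanged."""
    return {variable_name: source_record[variable_name]}

def transform_code_all_that_apply(source_record, variable_name, possible_codes):
    """Code All That Apply class: one Dex variable per possible code,
    set to 1 if the code was selected and 0 otherwise."""
    selected = set(source_record[variable_name])
    return {f"{variable_name}_{code}": int(code in selected)
            for code in possible_codes}

blaise_record = {"RUSIZE": 3, "COVTYPE": [1, 4]}   # hypothetical source values

dex_row = {}
dex_row.update(transform_passthrough(blaise_record, "RUSIZE"))
dex_row.update(transform_code_all_that_apply(blaise_record, "COVTYPE",
                                             possible_codes=[1, 2, 3, 4]))
print(dex_row)  # {'RUSIZE': 3, 'COVTYPE_1': 1, 'COVTYPE_2': 0, 'COVTYPE_3': 0, 'COVTYPE_4': 1}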

The third layer is the transformation (or programming) layer. Using the specifications just described, the data are read from the Blaise database in the data collection structure, the transformation logic is applied, and a data file for each Dex table is written. The Dex tables are generally identical to the legacy Cheshire segments, such as BASE, HOME, or PERS. This set of intermediate data files is known as pre-Dex and has the same structure as the Dex database, but all files are in the Blaise format. Next, the format is transformed from the Blaise format to Oracle, writing to the Single-Round Database (SRD). The single-round structure is necessary because the data collection instrument does not contain all data for all rounds for a given case; rather, only the data required to field the case in that specific round are included. The SRD data are then merged into the existing data, yielding a cumulative Multi-Round Database (MRD).

The final layer relates the different databases to selected key deliverables. This layer is intentionally general. For example, while the MRD is the source for the PUF deliveries, there are many additional steps to edit the data, construct variables, and deliver a data file and codebook.

Return To Table Of Contents

6.1.4 TeleForm/Data Editing of Scanned Forms

TeleForm, a commercial off-the-shelf (COTS) software system for intelligent data capture and image processing, was used in 2022 to capture data collected in the DCS and the SAQ. TeleForm software reads the form image files and extracts data according to the project specifications. Supporting software checks the data for conformity with project specifications and flags data values that violate the validation rules for review and resolution.
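
The following sketch shows, in simplified form, how captured values can be checked against validation rules and flagged for review. The field names and rules are invented for illustration and are not the project's actual specifications.

# Illustrative sketch: flag captured SAQ/DCS values that violate simple
# validation rules so they can be reviewed and resolved. Field names and
# rules are hypothetical.
VALIDATION_RULES = {
    "general_health": lambda v: v in {1, 2, 3, 4, 5},   # e.g., 1=Excellent ... 5=Poor
    "days_in_bed":    lambda v: 0 <= v <= 365,
}

def flag_violations(captured_record):
    """Return the list of fields whose values fail their validation rule."""
    return [field for field, rule in VALIDATION_RULES.items()
            if field in captured_record and not rule(captured_record[field])]

print(flag_violations({"general_health": 7, "days_in_bed": 12}))  # ['general_health']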

As SAQs evolve to be multimode (web and paper) in 2023, we will update this section to discuss data harmonization and web data collection.

Return To Table Of Contents

6.1.5 Coding

Coding refers to the process of converting data items collected in text format to prespecified numeric codes. For the MEPS-HC, five types of information require coding: medical conditions, prescribed medicines, sources of payment, industry and occupation, and geographic location.

Condition and Prescribed Medicine Coding

In 2022, coding was performed on the conditions and prescribed medicine text strings reported by household respondents for calendar year 2021. An automated system enabled coders to easily search for and assign the appropriate ICD-10-CM code (for conditions) or Generic Product Identifier (GPI) code (for medicines). The system supports the verifier’s review of all codes and, as needed, correction of the coder’s initial decision. For the prescribed medicine coding, a pharmacist provided a further review of text strings questioned by the verifier, uncodable text strings, foreign medicines, and compound drugs. All coding actions are tracked in the system and error rates calculated weekly. Both the condition and prescribed medicine coding efforts were staffed by three coders.

During the 2022 coding cycle, coding managers continued to refine a number of new and revised procedures and processes implemented for the coding of 2018 data in 2019. These revisions were a result of many months of collaboration between AHRQ and Westat in evaluating all aspects of the coding processes for household reported conditions, prescribed medicines, and sources of payment, including updating and maintaining the authority tables and the development of tools and resource documents to facilitate the execution of these tasks. Also in 2019, Westat deployed a new web-based coding system for condition and prescribed medicine coding to replace the Access database previously used. The new system better supports downstream-processing activities and aligns with other web-based systems used across other components of MEPS. All aspects of coding work are supported by a number of scheduled quality control checks before, during, and after each coding cycle.

In 2022, medical conditions were coded to the greatest specificity indicated by the text string. The fully specified ICD-10 code is needed to accurately match to the CCSR. A total of 2,863 unique strings were manually coded, and the authority table was constructed with AHRQ-approved code assignments. This represented a 71-percent reduction in the average number of strings needing manual review compared with the period before the condition pick list and search tool was integrated into the CAPI instrument. The overall error rate for coders was 1 percent, below the contractual error rate goal of 2 percent.

Prescription medicine text strings for data year 2022 were coded to the set of GPI codes associated with the Master Drug Data Base (MDDB) maintained by Medi-Span, a part of Wolters Kluwer. The codes characterize medicines by therapeutic class, form, and dosage. To augment the assignment of codes to less-specified and ambiguous text strings, AHRQ developed procedures for assigning partial GPI codes and higher-level drug categories that were implemented in 2017 and continued through subsequent coding cycles. AHRQ also developed a set of exact and inexact matching programs to reduce the number of prescribed medicine strings sent for manual coding. Westat's implementation of these matching programs reduces the number of prescribed medicine text strings sent for manual coding by approximately 40 percent each year. The matching programs are reviewed and approved each year. A total of 7,135 strings were manually coded from 2022 data. In a process similar to that used for condition text strings, the prescription medicine text strings undergo two rounds of unduplication to identify the unique strings to be coded. AHRQ's exact and inexact matching programs are then run to further reduce the number of strings to be coded. In the spring of 2022, the prescribed medicine pick list and search tool was integrated into the CAPI instrument, which will affect the number of strings that need manual coding in 2023. The overall coding error rate (across all coders) was 1 percent, 1 percentage point below the contractual goal of 2 percent. As with conditions, all prescription text strings/codes were reviewed by a verifier, with additional review of selected strings provided by a pharmacist.
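
The sketch below illustrates the general idea of exact and inexact matching against an authority table of previously coded strings, with unmatched strings routed to manual coding. The authority table contents, GPI codes, and similarity threshold shown are invented for illustration; they do not reproduce AHRQ's matching programs.

# Illustrative sketch: match reported medicine text strings against an
# authority table of previously coded strings, routing only unmatched strings
# to manual coding. Table contents, codes, and the threshold are hypothetical.
from difflib import SequenceMatcher

AUTHORITY_TABLE = {             # previously coded string -> GPI code (invented)
    "lisinopril 10mg": "3610001000",
    "metformin 500mg": "2725005000",
}

def normalize(text):
    return " ".join(text.lower().split())

def match_string(reported, threshold=0.9):
    """Return (code, match_type), or (None, 'manual') if no match is found."""
    text = normalize(reported)
    if text in AUTHORITY_TABLE:                      # exact match
        return AUTHORITY_TABLE[text], "exact"
    best_code, best_score = None, 0.0
    for known, code in AUTHORITY_TABLE.items():      # inexact (fuzzy) match
        score = SequenceMatcher(None, text, known).ratio()
        if score > best_score:
            best_code, best_score = code, score
    if best_score >= threshold:
        return best_code, "inexact"
    return None, "manual"

for s in ["Lisinopril 10MG", "metformn 500mg", "amoxicillin 250mg"]:
    print(s, "->", match_string(s))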

Source of Payment Coding

Source of payment (SOP) information is collected in both the household and the medical provider components. In the HC charge payment section of the CAPI instrument, the names of the sources of payment are collected in three places: when the bill was paid by a source identified in response to a direct question about payment (REIMNAM); when the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL1); and in response to a question about a direct payment source for prescription medicines (SRCNAME). The responses are coded to one of the source of payment categories under which healthcare expenditures are reported in the MEPS PUFs. These payment sources include:

The SOP Coding Guidelines is a manual updated each year before the start of the annual coding cycle, submitted for AHRQ approval, and distributed to the coders. Health insurance show cards and data from the health insurance plan file for CAPI are available to coders as resource materials. Since the Medical Provider Component (MPC) of MEPS uses the same set of source of payment codes as the Household Component, coding rules and decisions are coordinated with the MPC contractor (RTI) to ensure consistency in the coding. Before the start of the coding cycle, Westat compares RTI’s authority tables with its own to identify any inconsistencies. AHRQ adjudicates these to ensure the authority tables from each contractor are aligned.

Each year, the source of payment text strings extracted from the reference year data are matched to a historical file of previously coded SOP text strings to create a file of matched strings with suggested or "matched" codes. These match-coded strings are reviewed by coders and verified or modified as needed. This review is required because insurance companies change their product lines and coverage offerings very frequently, and as a result, the source of payment code for a given text string (e.g., the name of an insurance company or plan) can change from year to year. For example, from one year to the next an insurer or insurance product may participate in or drop out of state exchanges; may offer Medicare Part D or dental or vision insurance, or may drop it; may add Medicare Advantage plans in addition to Medicaid HMOs; or may gain or lose state contracts as Medicaid service providers. As a result of these changes, the appropriate code for a company or specific plan may also change from year to year. Strings that do not match to a string in the history table are researched and have an appropriate SOP code assigned by coding staff.

SOP coding during 2022 was for the payment sources reported for 2021 events. For cases when the bill was paid by a source identified in response to a direct question about payment (REIMNAM), a total of 1,577 previously coded sources of payment text strings were reviewed and updated as needed. After unduplication of the strings reported for 2021, coders reviewed and coded 1,935 strings. If the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL1), coders reviewed and coded 3,658 strings. For text strings reported as direct payers for prescription medicine (SRCNAME), 554 new text strings were reviewed and coded by coders.

Industry and Occupation Coding

Industry and Occupation coding is performed for MEPS by the Census Bureau using the Census Bureau's Demographic Surveys Division's (DSD's) computer-assisted industry and occupation (I&O) codes, which can be cross-walked to the 2007 North American Industry Classification System (NAICS) and the 2010 Standard Occupational Classification (SOC). The codes characterize the jobs reported by household respondents and are released annually on the FY JOBS file. During 2022, 12,409 jobs were coded for the 2021 JOBS file.

GEO Coding

The Westat Geographic Information Systems (GIS) division GEO-codes household addresses, assigning latitude and longitude coordinates as well as other variables such as county and state Federal Information Processing Standards (FIPS) codes, Metropolitan Statistical Area (MSA) status, Designated Market Area, and Census Place. RU-level data are expanded to the person level and delivered to AHRQ as part of the set of "Master Files" sent yearly. These data are not included in a PUF, but some variables are used for the FY weights processing.
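
The following sketch illustrates the expansion of RU-level geocoded variables to the person level by joining on the RU identifier; the identifiers, coordinates, and variable names are invented for illustration.

# Illustrative sketch: expand RU-level geocoded variables to the person level
# by joining on the RU identifier. Identifiers and values are hypothetical.
ru_geo = {
    "RU100": {"latitude": 39.08, "longitude": -77.15,
              "state_fips": "24", "county_fips": "031", "msa_status": "1"},
}

persons = [
    {"person_id": "RU100-01", "ru_id": "RU100"},
    {"person_id": "RU100-02", "ru_id": "RU100"},
]

# Each person inherits the geocoded variables of their RU.
person_level = [{**person, **ru_geo[person["ru_id"]]} for person in persons]
for row in person_level:
    print(row)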

During the calendar year 2022 coding cycle, 22,857 unique address records for full-year reporting units were processed.

Return To Table Of Contents

6.2 Data Delivery

The primary objective of MEPS is to produce a series of data files for public release each calendar year. The inter-round processing, editing, and variable construction tasks all serve to prepare these PUFs. Each file addresses one or more aspects of the U.S. civilian noninstitutionalized population's access to, use of, and payments for healthcare.

The Oracle system has a separate database for each data year, a recent departure from having individual databases for each panel/year combination. This change streamlines data processing and was necessitated by the extension of Panels 23 and 24 to collect data through nine rounds.

Due to the pandemic, Panels 23 and 24 are being extended through Round 9. The MEPS 2021 database contains Panels 23 through 26, and the MEPS 2022 database contains Panels 24, 26, and 27.

After the data are in the Oracle delivery database, each analytical team performs basic edit checks on the data to begin the process. These edits ensure the data conform to the CAPI instrument’s flow as well as to AHRQ’s analytical needs. These edits can be run in SAS, using SAS datasets extracted from the delivery database, or in SQL directly on the delivery database. Problems identified through the basic edits process may require updates to the data. If updating is required, these updates may be accomplished in one of two ways:

  1. Programmatic updates can correct problems affecting a large volume of cases that fail a basic edit.

  2. Manual updates can be set up with audit trails maintained to correct data anomalies.
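
The sketch below illustrates, in simplified form, how an update of the kind described in item 2 can be applied while an audit-trail entry is written; the record layout, field names, and reason text are hypothetical.

# Illustrative sketch: apply a data update while writing an audit-trail entry,
# so that both programmatic and manual corrections remain traceable. The
# record layout and fields are hypothetical.
import datetime

audit_trail = []

def apply_update(record, field, new_value, reason, updated_by):
    """Update one field and append an audit entry capturing old and new values."""
    audit_trail.append({
        "case_id": record["case_id"],
        "field": field,
        "old_value": record.get(field),
        "new_value": new_value,
        "reason": reason,
        "updated_by": updated_by,
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
    })
    record[field] = new_value

case = {"case_id": "A1001", "event_date": "2021-13-05"}   # fails a basic date edit
apply_update(case, "event_date", "2021-03-05",
             reason="Month and day transposed; failed date-range edit",
             updated_by="analyst01")
print(case)
print(audit_trail[0])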

Once all the edits have been completed for an analytical team, and QC frequencies and univariates have been approved, notification is sent to all other analytical teams so that work can be coordinated in those areas.

Return To Table Of Contents

6.2.1 Variable Construction

Analytical groups at AHRQ work with Westat analysts to define the variables of interest for inclusion on the PUF and other key data deliveries. Variables are named according to standard naming conventions, and once the list is approved, descriptive specifications are written to define each variable and to provide detailed information for programming.

Specifications are written at two levels. The high-level specification is a descriptive specification intended to document the concept of the variable and provide high-level information regarding the variable construction requirements. The detailed-level specifications contain the details required to develop programming code for building the variables. Specifications are written and sent to AHRQ for approval. Once approval is received for the specification, program development can proceed for that variable.

Specifications guide programming development, and once programs have been written, code reviews compare newly developed code against specifications to identify problems in either code or specifications. This program development process includes a number of steps and checkpoints to ensure that all new programs meet all specification requirements:

  1. Review approved high- and detailed-level specifications.

  2. Write programs for each specification using SAS or SQL.

  3. Test all programmed code for accuracy.

  4. Conduct detailed code reviews to review specifications and code.

  5. Test code on SAS production files or Oracle database without committing.

  6. Construct variables either in SAS (and either load variables to Oracle or continue development in SAS, depending on the file) or directly in the Oracle production database.

  7. Review frequencies and cross-tabulations for accuracy.

This model is followed for the development of all new programs required for data delivery. For mature programs that are reused in subsequent deliveries with only minor modifications, the process is appropriately streamlined to ensure both accuracy and efficiency on all programs.
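
The following sketch illustrates this general pattern for a single constructed variable, followed by a frequency review as a QC step. The variable, categories, and cutoffs are invented for illustration and are not a MEPS specification.

# Illustrative sketch: construct a simple derived variable from source fields
# and review its frequency distribution as a QC step. The variable, categories,
# and reserved codes are hypothetical.
from collections import Counter

def construct_age_group(age):
    """Derived variable: collapse age into broad reporting categories."""
    if age is None or age < 0:
        return -1            # reserved code for missing/invalid
    if age < 18:
        return 1
    if age < 65:
        return 2
    return 3

source_rows = [{"age": 7}, {"age": 34}, {"age": 70}, {"age": None}]
constructed = [construct_age_group(row["age"]) for row in source_rows]

# QC review: frequencies of the constructed variable.
print(Counter(constructed))   # e.g., Counter({1: 1, 2: 1, 3: 1, -1: 1})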

Return To Table Of Contents

6.2.2 File Deliveries

Public Use File Deliveries

The principal files delivered during calendar year 2022 are listed below:

Ancillary File Deliveries

In addition to the principal data files delivered for public release each year, the project also produces a number of ancillary files for delivery to AHRQ. These include an extensive series of person- and family-level weights, “raw” data files reflecting MEPS data at intermediate stages of capture and editing, and files generated at the end of each round or as needed to support analysis of both substantive and methodological topics. A comprehensive list of the files delivered during 2022 appears in the appendix.

Medical Provider Component (MPC) Files

During each year’s processing cycle, Westat also creates files for the MPC contractor and, in turn, receives data files back from the MPC. As in prior years, Westat provided sample files for the MPC in three waves, with the first two waves delivered while HC data collection was still in progress. In preparing the sample files to be delivered in 2022 for MPC collection of data about 2021 health events, Westat again applied the program developed in 2014 for de-duplicating the sample of providers. This process, developed in consultation with AHRQ, was designed to reduce the number of duplicate providers reported from the household data collection.

Early in 2022, following completion of MPC data collection and processing for 2020 events, Westat received the files containing data collected in the MPC, with linkages matching events collected in the MPC to events collected in the HC. In processing at Westat, matched events from the MPC served as the primary source for imputing expenditure variables for the 2020 events. A similar file of prescribed medicines was also delivered to support matching and imputation of expenditures for the prescribed medicines at AHRQ. Timely and well-coordinated data handoffs between Westat and the MPC are critical to the timely delivery of the full-year expenditure files. With each additional year of interaction and cooperation, the handoffs between the MPC and HC have gone more and more smoothly.

Return To Table Of Contents

Appendix A
Comprehensive Tables – Household Survey

Table A-1. Data collection periods and starting RU-level sample sizes, all panels

Data collection period RU-level sample size*
January-June 1996 10,799
Panel 1 Round 1 10,799
July-December 1996 9,485
Panel 1 Round 2 9,485
January-June 1997 15,689
Panel 1 Round 3 9,228
Panel 2 Round 1 6,461
July-December 1997 14,657
Panel 1 Round 4 9,019
Panel 2 Round 2 5,638
January-June 1998 19,269
Panel 1 Round 5 8,477
Panel 2 Round 3 5,382
Panel 3 Round 1 5,410
July-December 1998 9,871
Panel 2 Round 4 5,290
Panel 3 Round 2 4,581
January-June 1999 17,612
Panel 2 Round 5 5,127
Panel 3 Round 3 5,382
Panel 4 Round 1 7,103
July-December 1999 10,161
Panel 3 Round 4 4,243
Panel 4 Round 2 5,918
January-June 2000 15,447
Panel 3 Round 5 4,183
Panel 4 Round 3 5,731
Panel 5 Round 1 5,533
July-December 2000 10,222
Panel 4 Round 4 5,567
Panel 5 Round 2 4,655
January-June 2001 21,069
Panel 4 Round 5 5,547
Panel 5 Round 3 4,496
Panel 6 Round 1 11,026
July-December 2001 13,777
Panel 5 Round 4 4,426
Panel 6 Round 2 9,351
January-June 2002 21,915
Panel 5 Round 5 4,393
Panel 6 Round 3 9,183
Panel 7 Round 1 8,339
July-December 2002 15,968
Panel 6 Round 4 8,977
Panel 7 Round 2 6,991
January-June 2003 24,315
Panel 6 Round 5 8,830
Panel 7 Round 3 6,779
Panel 8 Round 1 8,706
July-December 2003 13,814
Panel 7, Round 4 6,655
Panel 8, Round 2 7,159
January-June 2004 22,552
Panel 7 Round 5 6,578
Panel 8 Round 3 7,035
Panel 9 Round 1 8,939
July-December 2004 14,068
Panel 8, Round 4 6,878
Panel 9, Round 2 7,190
January-June 2005 22,548
Panel 8 Round 5 6,795
Panel 9 Round 3 7,005
Panel 10 Round 1 8,748
July-December 2005 13,991
Panel 9, Round 4 6,843
Panel 10, Round 2 7,148
January-June 2006 23,278
Panel 9 Round 5 6,703
Panel 10 Round 3 6,921
Panel 11 Round 1 9,654
July-December 2006 14,280
Panel 10 Round 4 6,708
Panel 11 Round 2 7,572
January-June 2007 21,326
Panel 10 Round 5 6,596
Panel 11 Round 3 7,263
Panel 12 Round 1 7,467
July-December 2007 12,906
Panel 11 Round 4 7,005
Panel 12 Round 2 5,901
January-June 2008 22,414
Panel 11 Round 5 6,895
Panel 12 Round 3 5,580
Panel 13 Round 1 9,939
July-December 2008 13,384
Panel 12 Round 4 5,376
Panel 13 Round 2 8,008
January-June 2009 22,960
Panel 12 Round 5 5,261
Panel 13 Round 3 7,800
Panel 14 Round 1 9,899
July-December 2009 15,339
Panel 13 Round 4 7,670
Panel 14 Round 2 7,669
January-June 2010 23,770
Panel 13 Round 5 7,576
Panel 14 Round 3 7,226
Panel 15 Round 1 8,968
July-December 2010 13,785
Panel 14 Round 4 6,974
Panel 15 Round 2 6,811
January-June 2011 23,693
Panel 14 Round 5 6,845
Panel 15 Round 3 6,431
Panel 16 Round 1 10,417
July-December 2011 14,802
Panel 15 Round 4 6,254
Panel 16 Round 2 8,548
January-June 2012 24,247
Panel 15 Round 5 6,156
Panel 16 Round 3 8,160
Panel 17 Round 1 9,931
July-December 2012 16,161
Panel 16 Round 4 8,048
Panel 17 Round 2 8,113
January-June 2013 25,788
Panel 16 Round 5 7,969
Panel 17 Round 3 7,869
Panel 18 Round 1 9,950
July-December 2013 15,347
Panel 17 Round 4 7,656
Panel 18 Round 2 7,691
January-June 2014 24,857
Panel 17 Round 5 7,485
Panel 18 Round 3 7,402
Panel 19 Round 1 9,970
July-December 2014 14,665
Panel 18 Round 4 7,203
Panel 19 Round 2 7,462
January-June 2015 25,185
Panel 18 Round 5 7,163
Panel 19 Round 3 7,168
Panel 20 Round 1 10,854
July-December 2015 15,247
Panel 19 Round 4 6,946
Panel 20 Round 2 8,301
January-June 2016 24,694
Panel 19 Round 5 6,856
Panel 20 Round 3 7,987
Panel 21 Round 1 9,851
July-December 2016 15,390
Panel 20 Round 4 7,729
Panel 21 Round 2 7,661
January-June 2017 24,774
Panel 20 Round 5 7,611
Panel 21 Round 3 7,327
Panel 22 Round 1 9,835
July-December 2017 14,396
Panel 21 Round 4 7,025
Panel 22 Round 2 7,370
January-June 2018 23,573
Panel 21 Round 5 6,842
Panel 22 Round 3 6,892
Panel 23 Round 1 9,839
July-December 2018 13,766
Panel 22 Round 4 6,726
Panel 23 Round 2 7,040
January-June 2019 23,261
Panel 22 Round 5 6,624
Panel 23 Round 3 6,773
Panel 24 Round 1 9,864
July-December 2019 13,403
Panel 23 Round 4 6,569
Panel 24 Round 2 6,834
January-June 2020 22,667
Panel 23 Round 5 6,413
Panel 24 Round 3 6,382
Panel 25 Round 1 9,872
July-December 2020 15,633
Panel 23 Round 6 5,264
Panel 24 Round 4 5,574
Panel 25 Round 2 4,795
January-June 2021 23,340
Panel 23 Round 7 4,624
Panel 24 Round 5 4,879
Panel 25 Round 3 4,328
Panel 26 Round 1 9,509
July-December 2021 16,828
Panel 23 Round 8 4,093
Panel 24 Round 6 4,048
Panel 25 Round 4 3,768
Panel 26 Round 2 4,919
January-June 2022 24,465
Panel 23 Round 9 3,673
Panel 24 Round 7 3,573
Panel 25 Round 5 3,339
Panel 26 Round 3 4,180
Panel 27 Round 1 9,700
July-December 2022 12,491
Panel 24 Round 8 3,174
Panel 26 Round 4 3,866
Panel 27 Round 2 5,451

* RU-level sample size for this table derived from field management system counts and operational reports detailing fielded sample.

Return To Table Of Contents

Table A-2. MEPS household survey data collection results, all panels*

Panel/Round Original sample Split cases (movers) Student cases Out-of-scope cases Net sample Completes Average interviewer hours/ complete Response rate (%)
Panel 1 Round 1 10,799 675 125 165 11,434 9,496 10.4 83.1
Round 2 9,485 310 74 101 9,768 9,239 8.7 94.6
Round 3 9,228 250 28 78 9,428 9,031 8.6 95.8
Round 4 9,019 261 33 89 9,224 8,487 8.5 92.0
Round 5 8,477 80 5 66 8,496 8,369 6.5 98.5
Panel 2 Round 1 6,461 431 71 151 6,812 5,660 12.9 83.1
Round 2 5,638 204 27 54 5,815 5,395 9.1 92.8
Round 3 5,382 166 15 52 5,511 5,296 8.5 96.1
Round 4 5,290 105 27 65 5,357 5,129 8.3 95.7
Round 5 5,127 38 2 56 5,111 5,049 6.7 98.8
Panel 3 Round 1 5,410 349 44 200 5,603 4,599 12.7 82.1
Round 2 4,581 106 25 39 4,673 4,388 8.3 93.9
Round 3 4,382 102 4 42 4,446 4,249 7.3 95.5
Round 4 4,243 86 17 33 4,313 4,184 6.7 97.0
Round 5 4,183 23 1 26 4,181 4,114 5.6 98.4
Panel 4 Round 1 7,103 371 64 134 7,404 5,948 10.9 80.3
Round 2 5,918 197 47 40 6,122 5,737 7.2 93.7
Round 3 5,731 145 10 39 5,847 5,574 6.9 95.3
Round 4 5,567 133 35 39 5,696 5,540 6.8 97.3
Round 5 5,547 52 4 47 5,556 5,500 6.0 99.0
Panel 5 Round 1 5,533 258 62 103 5,750 4,670 11.1 81.2
Round 2 4,655 119 27 27 4,774 4,510 7.7 94.5
Round 3 4,496 108 17 24 4,597 4,437 7.2 96.5
Round 4 4,426 117 20 41 4,522 4,396 7.0 97.2
Round 5 4,393 47 12 32 4,420 4,357 5.5 98.6
Panel 6 Round 1 11,026 595 135 200 11,556 9,382 10.8 81.2
Round 2 9,351 316 49 50 9,666 9,222 7.2 95.4
Round 3 9,183 215 23 41 9,380 9,001 6.5 96.0
Round 4 8,977 174 32 66 9,117 8,843 6.6 97.0
Round 5 8,830 94 14 46 8,892 8,781 5.6 98.8
Panel 7 Round 1 8,339 417 76 122 8,710 7,008 10.0 80.5
Round 2 6,991 190 40 24 7,197 6,802 7.2 94.5
Round 3 6,779 169 21 32 6,937 6,673 6.5 96.2
Round 4 6,655 133 17 34 6,771 6,593 7.0 97.4
Round 5 6,578 79 11 39 6,629 6,529 5.7 98.5
Panel 8 Round 1 8,706 441 73 175 9,045 7,177 10.0 79.3
Round 2 7,159 218 52 36 7,393 7,049 7.2 95.4
Round 3 7,035 150 13 33 7,165 6,892 6.5 96.2
Round 4 6,878 149 27 53 7,001 6,799 7.3 97.1
Round 5 6,795 71 8 41 6,833 6,726 6.0 98.4
Panel 9 Round 1 8,939 417 73 179 9,250 7,205 10.5 77.9
Round 2 7,190 237 40 40 7,427 7,027 7.7 94.6
Round 3 7,005 189 24 31 7,187 6,861 7.1 95.5
Round 4 6,843 142 23 44 6,964 6,716 7.4 96.5
Round 5 6,703 60 8 43 6,728 6,627 6.1 98.5
Panel 10 Round 1 8,748 430 77 169 9,086 7,175 11.0 79.0
Round 2 7,148 219 36 22 7,381 6,940 7.8 94.0
Round 3 6,921 156 10 31 7,056 6,727 6.8 95.3
Round 4 6,708 155 13 34 6,842 6,590 7.3 96.3
Round 5 6,596 55 9 38 6,622 6,461 6.2 97.6
Panel 11 Round 1 9,654 399 81 162 9,972 7,585 11.5 76.1
Round 2 7,572 244 42 24 7,834 7,276 7.8 92.9
Round 3 7,263 170 15 25 7,423 7,007 6.9 94.4
Round 4 7,005 139 14 36 7,122 6,898 7.2 96.9
Round 5 6,895 51 7 44 6,905 6,781 5.5 98.2
Panel 12 Round 1 7,467 331 86 172 7,712 5,901 14.2 76.5
Round 2 5,901 157 27 27 6,058 5,584 9.1 92.2
Round 3 5,580 105 13 12 5,686 5,383 8.1 94.7
Round 4 5,376 102 12 16 5,474 5,267 8.8 96.2
Round 5 5,261 50 8 21 5,298 5,182 6.4 97.8
Panel 13 Round 1 9,939 502 97 213 10,325 8,017 12.2 77.6
Round 2 8,008 220 47 23 8,252 7,809 9.0 94.6
Round 3 7,802 204 14 38 7,982 7,684 7.2 96.2
Round 4 7,670 162 17 40 7,809 7,576 7.5 97.0
Round 5 7,576 70 15 38 7,623 7,461 6.1 97.9
Panel 14 Round 1 9,899 394 74 140 10,227 7,650 12.3 74.8
Round 2 7,669 212 29 27 7,883 7,239 8.3 91.8
Round 3 7,226 144 23 34 7,359 6,980 7.3 94.9
Round 4 6,974 112 23 30 7,079 6,853 7.7 96.8
Round 5 6,845 55 9 30 6,879 6,761 6.2 98.3
Panel 15 Round 1 8,968 374 73 157 9,258 6,802 13.2 73.5
Round 2 6,811 171 19 21 6,980 6,435 8.9 92.2
Round 3 6,431 134 23 22 6,566 6,261 7.2 95.4
Round 4 6,254 116 15 26 6,359 6,165 7.8 97.0
Round 5 6,156 50 5 19 6,192 6,078 6.0 98.2
Panel 16 Round 1 10,417 504 98 555 10,940 8,553 11.4 78.2
Round 2 8,353 248 40 32 8,821 8,351 7.6 94.7
Round 3 8,160 223 19 27 8,375 8,236 6.4 96.1
Round 4 8,048 151 16 13 8,390 8,162 6.6 97.3
Round 5 7,969 66 13 25 8,198 7,998 5.5 97.6
Panel 17 Round 1 9,931 490 92 127 10,386 8,121 11.7 78.2
Round 2 8,113 230 35 19 8,359 7,874 7.9 94.2
Round 3 7,869 180 15 15 8,049 7,663 6.3 95.2
Round 4 7,656 199 19 30 7,844 7,494 7.4 95.5
Round 5 7,485 87 10 23 7,559 7,445 6.1 98.5
Panel 18 Round 1 9,950 435 83 111 10,357 7,683 12.3 74.2
Round 2 7,691 264 32 16 7,971 7,402 9.2 92.9
Round 3 7,402 235 21 22 7,635 7,213 7.6 94.5
Round 4 7,203 189 14 22 7,384 7,172 7.5 97.1
Round 5 7,163 94 12 15 7,254 7,138 6.2 98.4
Panel 19 Round 1 9,970 492 70 115 10,417 7,475 13.5 71.8
Round 2 7,460 222 23 24 7,681 7,188 8.4 93.6
Round 3 7,168 187 12 17 7,350 6,962 7.0 94.7
Round 4 6,946 146 20 23 7,089 6,858 7.4 96.7
Round 5 6,856 75 7 24 6,914 6,794 5.9 98.3
Panel 20 Round 1 10,854 496 85 117 11,318 8,318 12.5 73.5
Round 2 8,301 243 39 22 8,561 7,998 8.3 93.4
Round 3 7,987 173 17 26 8,151 7,753 6.8 95.1
Round 4 7,729 161 19 31 7,878 7,622 7.2 96.8
Round 5 7,611 99 13 23 7,700 7,421 6.0 96.4
Panel 21 Round 1 9,851 462 92 89 10,316 7,674 12.6 74.4
Round 2 7,661 207 32 17 7,883 7,327 8.5 93.0
Round 3 7,327 166 14 19 7,488 7,043 7.2 94.1
Round 4 7,025 119 14 20 7,138 6,907 7.0 96.8
Round 5 6,914 42 8 34 6,930 6,778 5.9 97.8
Panel 22 Round 1 9,835 352 68 86 10,169 7,381 12.8 72.6
Round 2 7,371 166 19 11 7,545 7,039 8.5 93.3
Round 3 7,071 100 12 19 7,164 6,808 6.7 95.0
Round 4 6,815 91 13 18 6,901 6,672 6.8 96.7
Round 5 6,670 35 7 12 6,700 6,584 5.3 98.3
Panel 23 Round 1 9,960 1,931 46 110 10,089 7,351 12.5 72.9
Round 2 7,387 106 14 15 7,492 6,960 8.2 92.9
Round 3 6,987 102 11 18 7,082 6,703 6.1 94.6
Round 4 6,704 74 10 12 6,776 6,522 6.6 96.2
Round 5 6,503 34 4 5 6,536 6,383 5.3 97.7
Round 6 6,498 90 10 18 6,480 5,120 4.8 79.0
Round 7 5,176 36 5 6 5,170 4,513 5.2 87.3
Round 8 4,558 27 3 10 4,548 3,984 5.8 87.6
Panel 24 Round 1 9,976 153 43 82 10,090 7,186 11.8 71.2
Round 2 7,211 98 19 5 7,323 6,777 7.9 92.5
Round 3 6,812 76 9 7 6,890 6,289 6.0 91.3
Round 4 6,335 44 4 13 6,370 5,446 5.1 85.5
Round 5 5,510 31 4 15 5,495 4,770 5.3 86.8
Round 6 4,816 22 8 8 4,808 3,959 5.7 82.3
Panel 25 Round 1 10,008 184 38 78 10,152 6,265 10.8 61.7
Round 2 5,907 49 14 12 5,958 4,677 5.5 78.5
Round 3 5,191 38 5 2 5,189 4,230 6.1 81.5
Round 4 4,314 40 10 7 4,307 3,685 7.3 85.6
Round 5 3,712 11 5 6 3,706 3,278 5.3 88.4
Panel 26 Round 1 9,674 160 29 68 9,795 5,882 11.1 60.1
Round 2 6,047 83 11 2 6,045 4,799 9.0 79.4
Round 3 4,882 42 4 6 4,876 4,103 6.8 84.1
Round 4 4,165 30 11 4 4,161 3,805 7.6 94.4
Round 5
Panel 27 Round 1 10,085 193 28 78 10,007 6,158 13.2 61.5
Round 2 6,288 68 11 3 6,285 5,368 8.9 85.4

* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.

Return To Table Of Contents

Table A-3. Response rates by data collection year

Round 1 Round 2 Round 3 Round 4 Round 5 Round 6 Round 7 Round 8 Round 9
2010
Panel 15 73.5 92.2
Panel 14 94.9 96.8
Panel 13 97.9
2011
Panel 16 78.2 94.8
Panel 15 95.4 97.0
Panel 14 98.3
2012
Panel 17 78.2 94.2
Panel 16 96.1 97.3
Panel 15 98.2
2013
Panel 18 74.2 92.9
Panel 17 95.2 95.5
Panel 16 97.6
2014
Panel 19 71.8 93.6
Panel 18 94.5 97.1
Panel 17 98.5
2015
Panel 20 73.5 93.4
Panel 19 94.7 96.7
Panel 18 98.4
2016
Panel 21 74.4 93.0
Panel 20 95.1 96.8
Panel 19 98.3
2017
Panel 22 72.6 93.3
Panel 21 94.1 96.8
Panel 20 96.4
2018
Panel 23 72.9 92.9
Panel 22 95.0 96.7
Panel 21 97.8
2019
Panel 24 71.2 92.5
Panel 23 94.6 96.2
Panel 22 98.3
2020
Panel 25 61.7 78.5
Panel 24 91.3 85.5
Panel 23 97.7 79.0
2021
Panel 26 60.1 79.4
Panel 25 81.5 85.6
Panel 24 86.8 82.3
Panel 23 87.3 87.6
2022
Panel 27 61.5 85.4
Panel 26 84.1 91.4
Panel 25 88.6
Panel 24 87.5 88.7
Panel 23 90.2

Return To Table Of Contents

Table A-4. Summary of MEPS Round 1 response and nonresponse

2013 P18R1 2014 P19R1 2015 P20R1 2016 P21R1 2017 P22R1 2018 P23R1 2019 P24R1 2020 P25R1 2021 P26R1 2022 P27R1
Total sample 10,468 10,532 11,435 10,405 10,255 10,199 10,172 10,230 9,863 10,085
Out of scope (%) 1.1 1.1 1.0 0.9 0.8 1.1 0.8 0.8 0.7 0.8
Complete (%) 74.2 71.8 73.5 74.4 72.6 72.1 70.6 61.2 59.6 61.1
Nonresponse (%) 25.8 28.2 26.5 25.6 27.4 26.9 28.6 38.0 39.7 38.2
Refusal (%) 20.1 22.4 21.0 20.2 21.8 22.1 24.0 28.7 31.2 30.4
Not located (%) 4.3 4.2 4.3 3.7 3.9 3.1 3.1 3.2 4.3 3.3
Other nonresponse (%) 1.4 1.6 1.2 1.7 1.7 1.7 1.5 6.1 4.2 4.5

Return To Table Of Contents

Table A-5. Summary of Round 1 response by NHIS completion status

NHIS completion status 2013 P18R1 2014 P19R1 2015 P20R1 2016 P21R1 2017 P22R1 2018 P23R1 2019 P24R1 2020 P25R1 2021 P26R1 2022 P27R1
Original NHIS sample (N) 9,951 9,970 10,854 9,851 9,835 9,839 9,864 9,866 9,509 9,700
Percent complete in NHIS 78.1 81.9 80.6 77.6 81.0 80.4 84.2 89.3 85.3 83.3
Percent partial complete in NHIS 21.9 18.1 19.4 22.4 19.0 19.6 15.8 10.7 14.7 16.7
MEPS Round 1 response rate:
Percent complete for NHIS completes 76.9 74.5 75.9 77.3 75.4 75.4 73.5 63.5 63.1 64.2
Percent complete for NHIS partial completes 64.5 58.9 63.1 64.8 62.0 63.6 60.3 46.8 44.1 49.5

Note: Figures shown are based on original NHIS sample and exclude reporting units added to the sample as “splits” and “students.”

Return To Table Of Contents

Table A-6. Summary of MEPS Round 1 results for all RUs who ever refused

Panel Net sample (N) Ever refused (%) Converted (%) Final refusal rate (%) Final response rate (%)
Panel 15 9,258 29.4 26.6 21.0 73.5
Panel 16 10,940 26.3 30.9 17.6 78.2
Panel 17 10,386 25.3 30.2 17.2 78.2
Panel 18 10,357 25.5 25.0 18.1 74.2
Panel 19 10,418 30.1 23.3 22.4 71.8
Panel 20 11,318 30.1 29.2 21.0 73.5
Panel 21 10,316 29.1 29.0 20.2 74.4
Panel 22 10,169 30.1 27.6 21.8 72.6
Panel 23 10,089 31.3 25.6 22.4 72.9
Panel 24 10,090 32.6 23.4 24.2 71.2
Panel 25 10,152 34.8 12.3 28.9 61.7
Panel 26 9,795 40.4 19.3 31.4 60.0
Panel 27 10,007 37.7 14.8 30.6 61.5

Return To Table Of Contents

Table A-7. Summary of MEPS Round 1 results for RUs who were ever traced, Panels 15-27

Panel Total sample (N) Ever traced (%) Not located (%)
Panel 15 9,415 16.7 4.1
Panel 16 11,019 18.2 3.0
Panel 17 10,513 18.7 3.6
Panel 18 10,468 16.0 4.3
Panel 19 10,532 19.5 4.1
Panel 20 11,435 14.0 4.3
Panel 21 10,405 12.8 3.7
Panel 22 10,228 13.0 3.9
Panel 23 10,199 12.7 3.0
Panel 24 10,172 12.6 3.0
Panel 25 10,230 11.7 3.2
Panel 26 9,863 11.3 4.3
Panel 27 10,085 11.0 3.3

Return To Table Of Contents

Table A-8. Interview timing comparison (mean minutes per interview, single-session interviews)

Round Panel 16 Panel 17 Panel 18 Panel 19 Panel 20 Panel 21 Panel 22 Panel 23 Panel 24 Panel 25 Panel 26 Panel 27
Round 1 74.0 67.8 78.0 85.5 76.4 75.5 79.9 78.1 79.5 89.0 92.9 82.3
Round 2 88.1 90.2 102.9 92.3 86.3 85.3 88.8 88.2 87.0 89.7 93.3 79.3
Round 3 87.2 94.3 103.1 94.5 89.7 93.4 93.0 92.6 98.5 100.0 76.5
Round 4 85.9 99.6 89.0 84.6 80.5 82.7 84.3 86.8 86.2 93.2
Round 5 85.4 92.2 87.4 84.1 85.3 76.0 78.8 78.7 97.1 75.5
Round 6 88.4 89.7
Round 7 96.6 85.4
Round 8 90.1 78.5
Round 9 76.5

Return To Table Of Contents

Table A-9. Mean contact attempts by NHIS completion status, Round 1

Contact type Panel 20, Round 1 Panel 21, Round 1 Panel 22, Round 1 Panel 23, Round 1 Panel 24, Round 1 Panel 25, Round 1 Panel 26, Round 1 Panel 27, Round 1
All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial All RUs Complete Partial
N 10,854 8,751 2,103 9,851 7,645 2,206 9,835 7,963 1,872 9,839 7,913 1,926 9,864 8,306 1,558 9,866 8,814 1,052 9,509 8,113 1,396 9,700 8,077 1,623
% of all RUs 100 81.0 19.0 100 77.6 22.4 100 81.0 19.0 100 80.4 19.6 100 84.2 15.8 100 89.3 10.7 100 85.3 14.7 100 83.3 16.7
In-person 7.2 6.9 8.5 7.0 6.9 8.3 6.3 6.1 7.3 6.2 6.0 7.2 5.5 5.4 6.3 2.6 2.5 2.6 2.4 2.3 3.1 5.6 6.1 5.7
Telephone 2.1 2.0 2.5 2.0 1.9 2.4 1.5 1.5 1.7 1.5 1.4 1.7 1.3 1.2 1.6 9.7 9.5 11.6 8.8 8.7 9.8 8.7 8.7 9.4
CAVI - - - - - - - - - - - - - - - - - - - - - 10.6 10.6 11.3
Total 9.6 9.2 11.4 9.3 8.9 11.0 8.4 8.1 9.6 8.2 7.9 9.5 7.3 7.1 8.5 14.4 14.1 17.0 13.1 12.8 14.9 8.4 8.2 9.3

Return To Table Of Contents

Table A-10 Signing rates for medical provider authorization forms

Panel/round Signature method Authorization forms requested Authorization forms signed Signing rate (%)
Panel 1 Round 1 3,562 2,624 73.7
Round 2 19,874 14,145 71.2
Round 3 17,722 12,062 68.1
Round 4 17,133 10,542 61.5
Round 5 12,544 6,763 53.9
Panel 2 Round 1 2,735 1,788 65.4
Round 2 13,461 9,433 70.1
Round 3 11,901 7,537 63.3
Round 4 11,164 6,485 58.1
Round 5 8,104 4,244 52.4
Panel 3 Round 1 2,078 1,349 64.9
Round 2 10,335 6,463 62.5
Round 3 8,716 4,797 55.0
Round 4 8,761 4,246 48.5
Round 5 6,913 2,911 42.1
Panel 4 Round 1 2,400 1,607 67.0
Round 2 12,711 8,434 66.4
Round 3 11,078 6,642 60.0
Round 4 11,047 6,888 62.4
Round 5 8,684 5,096 58.7
Panel 5 Round 1 1,243 834 67.1
Round 2 14,008 9,618 68.7
Round 3 12,869 8,301 64.5
Round 4 13,464 9,170 68.1
Round 5 10,888 7,025 64.5
Panel 6 Round 1 2,783 2,012 72.3
Round 2 29,861 22,872 76.6
Round 3 26,068 18,219 69.9
Round 4 27,146 20,082 74.0
Round 5 21,022 14,581 69.4
Panel 7 Round 1 2,298 1,723 75.0
Round 2 22,302 17,557 78.7
Round 3 19,312 13,896 72.0
Round 4 16,934 13,725 81.1
Round 5 14,577 11,099 76.1
Panel 8 Round 1 2,287 1,773 77.5
Round 2 22,533 17,802 79.0
Round 3 19,530 14,064 72.0
Round 4 19,718 14,599 74.0
Round 5 15,856 11,106 70.0
Panel 9 Round 1 2,253 1,681 74.6
Round 2 22,668 17,522 77.3
Round 3 19,601 13,672 69.8
Round 4 20,147 14,527 72.1
Round 5 15,963 10,720 67.2
Panel 10 Round 1 2,068 1,443 69.8
Round 2 22,582 17,090 75.7
Round 3 18,967 13,396 70.6
Round 4 19,087 13,296 69.7
Round 5 15,787 10,476 66.4
Panel 11 Round 1 2,154 1,498 69.5
Round 2 23,957 17,742 74.1
Round 3 20,756 13,400 64.6
Round 4 21,260 14,808 69.7
Round 5 16,793 11,482 68.4
Panel 12 Round 1 1,695 1,066 62.9
Round 2 17,787 12,524 70.4
Round 3 15,291 10,006 65.4
Round 4 15,692 10,717 68.3
Round 5 12,780 8,367 65.5
Panel 13 Round 1 2,217 1,603 72.3
Round 2 24,357 18,566 76.2
Round 3 21,058 14,826 70.4
Round 4 21,673 15,632 72.1
Round 5 17,158 11,779 68.7
Panel 14 Round 1 2,128 1,498 70.4
Round 2 23,138 17,739 76.7
Round 3 19,024 13,673 71.9
Round 4 18,532 12,824 69.2
Round 5 15,444 10,201 66.1
Panel 15 Round 1 1,680 1,136 67.6
Round 2 18,506 13,628 73.6
Round 3 16,686 11,652 69.8
Round 4 16,260 11,139 68.5
Round 5 13,443 8,420 62.6
Panel 16 Round 1 1,811 1,223 67.5
Round 2 23,718 17,566 74.1
Round 3 21,780 14,828 68.1
Round 4 21,537 16,329 75.8
Round 5 16,688 12,028 72.1
Panel 17 Round 1 1,655 1,117 67.5
Round 2 21,749 17,694 81.4
Round 3 19,292 15,125 78.4
Round 4 20,086 15,691 78.1
Round 5 15,064 11,873 78.8
Panel 18 Round 1 1,677 1,266 75.5
Round 2 22,714 18,043 79.4
Round 3 20,728 15,827 76.4
Round 4 17,092 13,704 80.2
Round 5 15,448 11,796 76.4
Panel 19 Round 1 2,189 1,480 67.6
Round 2 22,671 17,190 75.8
Round 3 20,582 14,534 70.6
Round 4 17,102 13,254 77.5
Round 5 15,330 11,425 74.5
Panel 20 Round 1 2,354 1,603 68.1
Round 2 25,334 18,479 72.9
Round 3 22,851 15,862 69.4
Round 4 18,234 14,026 76.9
Round 5 16,274 12,100 74.4
Panel 21 Round 1 2,037 1,396 68.5
Round 2 22,984 17,295 75.2
Round 3 20,802 14,898 71.6
Round 4 16,487 13,110 79.5
Round 5 20,443 16,247 79.5
Panel 22 Round 1 2,274 1,573 69.2
Round 2 22,913 17,530 76.5
Round 3 26,436 19,496 73.7
Round 4 23,249 18,097 77.8
Round 5 17,171 12,168 70.9
Panel 23 Round 1 1,982 1,533 77.3
Round 2 29,576 21,850 73.9
Round 3 23,365 14,475 62.4
Round 4 19,220 13,483 70.2
Round 5 17,569 10,903 62.1
Round 6 12,701 8,002 63.0
Round 7 13,254 8,108 61.2
Round 8 11,589 7,624 65.8
Round 9 eSignature 597 542 90.8
DocuSign 5,867 4,528 77.2
Paper 2,601 1,172 45.1
Combined 9,065 6,242 68.9
Panel 24 Round 1 2,285 1,306 57.2
Round 2 24,755 15,865 64.1
Round 3 22,657 11,522 50.9
Round 4 14,612 7,716 52.8
Round 5 15,992 8,941 55.9
Round 6 11,366 6,658 58.6
Round 7 eSignature 860 799 92.9
DocuSign 6,856 4,997 72.9
Paper 3,032 1,254 41.4
Combined 10,748 7,050 65.6
Round 8 eSignature 1,121 1,055 94.1
DocuSign 4,997 3,500 70.0
Paper 1,625 661 40.7
Combined 7,743 5,216 67.4
Panel 25 Round 1 3,110 1,242 39.9
Round 2 15,259 7,292 47.8
Round 3 15,932 8,100 50.8
Round 4 11,252 7,204 64.0
Round 5 eSignature 3,796 3,570 94.0
DocuSign 3,336 2,339 70.1
Paper 1,877 431 23.0
Combined 9,009 6,340 70.4
Panel 26 Round 1 2,432 1,151 47.3
Round 2 17,765 10,564 59.5
Round 3 eSignature 7,510 7,043 93.8
DocuSign 4,668 2,980 63.8
Paper 2,964 419 14.1
Combined 15,142 10,442 69.0
Round 4 eSignature 6,494 6,295 95.4
DocuSign 2,544 1,420 55.8
Paper 1,351 184 13.6
Combined 10,389 7,799 75.1
Panel 27 Round 1 eSignature 1,222 1,147 93.9
DocuSign 523 285 54.5
Paper 477 39 8.2
Combined 2,222 1,471 66.2
Round 2 eSignature 10,831 10,286 95.0
DocuSign 4,744 2,026 42.7
Paper 2,855 192 6.7
Combined 18,430 12,504 67.8

Return To Table Of Contents

Table A-11 Signing rates for pharmacy authorization forms

Panel/round Signature method Permission forms requested Permission forms signed Signing rate (%)
Panel 1 Round 3 19,913 14,468 72.7
Round 5 8,685 6,002 69.1
Panel 2 Round 3 12,241 8,694 71.0
Round 5 8,640 6,297 72.9
Panel 3 Round 3 9,016 5,929 65.8
Round 5 7,569 5,200 68.7
Panel 4 Round 3 11,856 8,280 69.8
Round 5 10,688 8,318 77.8
Panel 5 Round 3 9,248 6,852 74.1
Round 5 8,955 7,174 80.1
Panel 6 Round 3 19,305 15,313 79.3
Round 5 17,981 14,864 82.7
Panel 7 Round 3 14,456 11,611 80.3
Round 5 13,428 11,210 83.5
Panel 8 Round 3 14,391 11,533 80.1
Round 5 13,422 11,049 82.3
Panel 9 Round 3 14,334 11,189 78.1
Round 5 13,416 10,893 81.2
Panel 10 Round 3 13,928 10,706 76.9
Round 5 12,869 10,260 79.7
Panel 11 Round 3 14,937 11,328 75.8
Round 5 13,778 11,332 82.3
Panel 12 Round 3 10,840 8,242 76.0
Round 5 9,930 8,015 80.7
Panel 13 Round 3 15,379 12,165 79.1
Round 4 10,782 7,795 72.3
Round 5 9,451 6,635 70.2
Panel 14 Round 2 11,841 9,151 77.3
Round 3 9,686 7,091 73.2
Round 4 9,298 6,623 71.2
Round 5 8,415 6,011 71.4
Panel 15 Round 2 9,698 7,092 73.1
Round 3 8,684 6,189 71.3
Round 4 8,163 5,756 70.5
Round 5 7,302 4,485 66.9
Panel 16 Round 2 12,093 8,892 73.5
Round 3 10,959 7,591 69.3
Round 4 10,432 8,194 78.6
Round 5 8,990 6,928 77.1
Panel 17 Round 2 14,181 12,567 88.6
Round 3 9,715 7,580 78.0
Round 4 9,759 7,730 79.2
Round 5 8,245 6,604 80.1
Panel 18 Round 2 10,977 8,755 79.8
Round 3 9,757 7,573 77.6
Round 4 8,526 6,858 80.4
Round 5 7,918 6,173 78.0
Panel 19 Round 2 10,749 8,261 76.9
Round 3 9,618 6,902 71.8
Round 4 8,557 6,579 76.9
Round 5 7,767 5,905 76.0
Panel 20 Round 2 12,074 8,796 72.9
Round 3 10,577 7,432 70.3
Round 4 9,099 6,945 76.3
Round 5 8,312 6,339 76.3
Panel 21 Round 2 10,783 7,985 74.1
Round 3 9,540 6,847 71.8
Round 4 8,172 6,387 78.2
Round 5 6,684 5,336 79.8
Panel 22 Round 2 10,510 7,919 75.4
Round 3 8,053 5,953 73.9
Round 4 7,284 5,670 77.8
Round 5 5,726 71.1
Panel 23 Round 2 8,834 6,514 73.8
Round 3 9,614 6,205 64.5
Round 4 8,486 5,900 69.5
Round 5 8,067 5,101 63.2
Round 6 5,668 3,418 60.3
Round 7 5,417 3,345 61.8
Round 8 5,182 3,341 64.5
Round 9 eSignature 303 269 88.8
DocuSign 2,587 1,983 76.7
Paper 1,240 563 45.4
Combined 4,130 2,815 68.2
Panel 24 Round 2 10,265 6,676 65.0
Round 3 9,096 4,831 53.1
Round 4 7,100 3,636 51.2
Round 5 6,528 3,682 56.4
Round 6 4,783 2,663 55.7
Round 7 eSignature 336 310 92.3
DocuSign 2,763 2,073 75.0
Paper 1,279 547 42.8
Combined 4,378 2,930 66.9
Round 8 eSignature 480 449 93.5
DocuSign 2,238 1,527 68.2
Paper 798 299 37.5
Combined 3,516 2,275 64.7
Panel 25 Round 2 6,783 3,180 46.9
Round 3 6,114 3,146 51.5
Round 4 4,640 2,888 62.2
Round 5 eSignature 1,667 1,572 94.3
DocuSign 1,416 983 69.4
Paper 787 181 23.0
Combined 3,870 2,736 70.7
Panel 26 Round 2 6,961 4,105 59.0
Round 3 eSignature 2,916 2,725 93.4
DocuSign 1,749 1,121 64.1
Paper 1,156 181 15.7
Combined 5,821 4,027 69.2
Round 4 eSignature 2,848 2,710 95.2
DocuSign 1,212 652 53.8
Paper 659 60 9.1
Combined 4,719 3,422 72.5
Panel 27 Round 2 eSignature 4,412 4,178 94.7
DocuSign 1,972 842 42.7
Paper 1,272 73 5.7
Combined 7,656 5,093 66.5

Return To Table Of Contents

Table A-12 Results of Self-Administered Questionnaire (SAQ) collection*

Panel/Round SAQs requested SAQs completed SAQs refused Other nonresponse Response rate (%)
Panel 1 Round 2 16,577 9,910 - - 59.8
Round 3 6,032 1,469 840 3,723 24.3
Combined, 1996 16,577 11,379 - - 68.6
Panel 4* Round 4 13,936 12,265 288 1,367 87.9
Round 5 1,683 947 314 422 56.3
Combined, 2000 13,936 13,212 - - 94.8
Panel 5* Round 2 11,239 9,833 191 1,213 86.9
Round 3 1,314 717 180 417 54.6
Combined, 2000 11,239 10,550 - - 93.9
Round 4 7,812 6,790 198 824 86.9
Round 5 1,022 483 182 357 47.3
Combined, 2001 7,812 7,273 - - 93.1
Panel 6 Round 2 16,577 14,233 412 1,932 85.9
Round 3 2,143 1,213 230 700 56.6
Combined, 2001 16,577 15,446 - - 93.2
Round 4 15,687 13,898 362 1,427 88.6
Round 5 1,852 967 377 508 52.2
Combined, 2002 15,687 14,865 - - 94.8
Panel 7 Round 2 12,093 10,478 196 1,419 86.6
Round 3 1,559 894 206 459 57.3
Combined, 2002 12,093 11,372 - - 94.0
Round 4 11,703 10,125 285 1,292 86.5
Round 5 1,493 786 273 434 52.7
Combined, 2003 11,703 10,911 - - 93.2
Panel 8 Round 2 12,533 10,765 203 1,565 85.9
Round 3 1,568 846 234 488 54.0
Combined, 2003 12,533 11,611 - - 92.6
Round 4 11,996 10,534 357 1,105 87.8
Round 5 1,400 675 344 381 48.2
Combined, 2004 11,996 11,209 - - 93.4
Panel 9 Round 2 12,541 10,631 381 1,529 84.8
Round 3 1,670 886 287 496 53.1
Combined, 2004 12,541 11,517 - - 91.9
Round 4 11,913 10,357 379 1,177 86.9
Round 5 1,478 751 324 403 50.8
Combined, 2005 11,913 11,108 - - 93.2
Panel 10 Round 2 12,360 10,503 391 1,466 85.0
Round 3 1,626 787 280 559 48.4
Combined, 2005 12,360 11,290 - - 91.3
Round 4 11,726 10,081 415 1,230 86.0
Round 5 1,516 696 417 403 45.9
Combined, 2006 11,726 10,777 - - 91.9
Panel 11 Round 2 13,146 10,924 452 1,770 83.1
Round 3 1,908 948 349 611 49.7
Combined, 2006 13,146 11,872 - - 90.3
Round 4 12,479 10,771 622 1,086 86.3
Round 5 1,621 790 539 292 48.7
Combined, 2007 12,479 11,561 - - 92.6
Panel 12 Round 2 10,061 8,419 502 1,140 83.7
Round 3 1,460 711 402 347 48.7
Combined, 2007 10,061 9,130 - - 90.7
Round 4 9,550 8,303 577 670 86.9
Round 5 1,145 541 415 189 47.3
Combined, 2008 9,550 8,844 - - 92.6
Panel 13 Round 2 14,410 12,541 707 1,162 87.0
Round 3 1,630 829 439 362 50.9
Combined, 2008 14,410 13,370 - - 92.8
Round 4 13,822 12,311 559 952 89.1
Round 5 1,364 635 476 253 46.6
Combined, 2009 13,822 12,946 - - 93.7
Panel 14 Round 2 13,335 11,528 616 1,191 86.5
Round 3 1,542 818 426 298 53.1
Combined, 2009 13,335 12,346 - - 92.6
Round 4 12,527 11,041 644 839 88.1
Round 5 1,403 645 497 261 46.0
Combined, 2010 12,527 11,686 - - 93.3
Panel 15 Round 2 11,857 10,121 637 1,096 85.4
Round 3 1,491 725 425 341 48.6
Combined, 2010 11,857 10,846 - - 91.5
Round 4 11,311 9,804 572 935 86.7
Round 5 1,418 678 461 279 47.8
Combined, 2011 11,311 10,482 - - 92.6
Panel 16 Round 2 15,026 12,926 707 1,393 86.0
Round 3 1,863 949 465 449 50.9
Combined, 2011 15,026 13,875 - - 92.3
Round 4 13,620 12,415 582 623 91.2
Round 5 1,112 516 442 154 46.4
Combined, 2012 13,620 12,931 - - 94.9
Panel 17 Round 2 14,181 12,567 677 937 88.6
Round 3 1,395 690 417 288 49.5
Combined, 2012 14,181 13,257 - - 93.5
Round 4 13,086 11,566 602 918 88.4
Round 5 1,429 655 504 270 45.8
Combined, 2013 13,086 12,221 - - 93.4
Panel 18 Round 2 13,158 10,805 785 1,568 82.1
Round 3 2,066 1,022 547 497 48.5
Combined, 2013 13,158 11,827 - - 89.9
Round 4 12,243 10,050 916 1,277 82.1
Round 5 2,063 936 721 406 45.4
Combined, 2014 12,243 10,986 - - 89.7
Panel 19 Round 2 12,664 10,047 1,014 1,603 79.3
Round 3 2,306 1,050 694 615 44.5
Combined, 2014 12,664 11,097 - - 87.6
Round 4 11,782 9,542 1,047 1,175 81.0
Round 5 2,131 894 822 414 42.0
Combined, 2015 11,782 10,436 - - 88.6
Panel 20 Round 2 14,077 10,885 1,223 1,966 77.3
Round 3 2,899 1,329 921 649 45.8
Combined, 2015 14,077 12,214 - - 86.8
Round 4 13,068 10,572 1,127 1,371 80.9
Round 5 2,262 1,001 891 370 44.3
Combined, 2016 13,068 11,573 - - 88.6
Panel 21 Round 2 13,143 10,212 1,170 1,761 77.7
Round 3 2,585 1,123 893 569 43.4
Combined, 2016 13,143 11,335 - - 86.2
Round 4 12,021 9,966 1,149 906 82.9
Round 5 2,078 834 884 360 40.1
Combined, 2017 12,021 10,800 - - 89.8
Panel 22 Round 2 12,304 9,929 1,086 1,289 80.7
Round 3 2,287 840 749 698 36.7
Combined, 2017 12,304 10,769 - - 87.5
Round 4 11,333 8,341 1,159 1,833 73.6
Round 5 2,090 811 896 383 38.8
Combined, 2018 11,333 9,152 - - 80.8
Panel 23 Round 2 12,349 8,711 1,364 1,289 70.5
Round 3 2,364 819 907 638 34.6
Combined, 2018 12,349 9,530 - - 77.2
Round 4 11,290 8,554 1,515 1,221 75.8
Round 5 2,711 983 923 805 36.3
Combined, 2019 11,290 9,537 - - 84.5
Round 6 8,537 4,732 682 3,123 55.4
Round 7 3,229 1,123 707 1,399 34.8
Combined, 2020 8,537 5,855 - - 68.6
Round 8 6,446 3,377 799 2,270 52.4
Round 9 2,654 724 633 1,297 27.3
Combined, 2021 6,446 4,101 - - 63.6
Panel 24 Round 2 12,027 8,726 1,641 1,660 72.6
Round 3 2,810 860 832 1,118 30.6
Combined, 2019 12,027 9,586 - - 79.7
Round 4 9,257 4,247 786 4,224 45.9
Round 5 4,224 1,476 838 1,910 34.9
Combined, 2020 9,257 5,723 - - 61.8
Round 6 6,440 3,196 819 2,425 49.6
Round 7 2,695 696 628 1,371 25.8
Combined, 2021 6,440 3,892 - - 60.4
Round 8 4,906 2,347 634 1,925 47.8
Panel 25 Round 2 8,109 3,555 529 4,025 43.8
Round 3 4,016 1,322 717 1,977 32.9
Combined, 2020 8,109 4,877 - - 60.1
Round 4 6,089 3,309 850 1,930 54.3
Round 5 2,325 655 583 1,087 28.2
Combined, 2021 6,089 3,964 - - 65.1
Panel 26 Round 2 8,419 4,609 1,009 2,801 54.7
Round 3 2,950 853 732 1,365 28.9
Combined, 2021 8,419 5,462 - - 64.9
Round 4 6,370 3,399 898 2,073 53.4
Panel 27 Round 2 9,690 4,669 1,529 3,492 48.2

* Totals represent combined collection of the SAQ and the parent-administered questionnaire (PAQ).

Return To Table Of Contents

Table A-13 Results of Diabetes Care Supplement (DCS) collection*

Panel/Round DCSs requested DCSs completed Response rate (%)
Panel 4 Round 5 696 631 90.7
Panel 5 Round 3 550 508 92.4
Round 5 570 500 87.7
Panel 6 Round 3 1,166 1,000 85.8
Round 5 1,202 1,166 97.0
Panel 7 Round 3 870 848 97.5
Round 5 869 820 94.4
Panel 8 Round 3 971 885 91.1
Round 5 977 894 91.5
Panel 9 Round 3 1,003 909 90.6
Round 5 904 806 89.2
Panel 10 Round 3 1,060 939 88.6
Round 5 1,078 965 89.5
Panel 11 Round 3 1,188 1,030 86.7
Round 5 1,182 1,053 89.1
Panel 12 Round 3 917 825 90.0
Round 5 883 815 92.3
Panel 13 Round 3 1,278 1,182 92.5
Round 5 1,278 1,154 90.3
Panel 14 Round 3 1,174 1,048 89.3
Round 5 1,177 1,066 90.6
Panel 15 Round 3 1,117 1,000 89.5
Round 5 1,097 990 90.3
Panel 16 Round 3 1,425 1,283 90.0
Round 5 1,358 1,256 92.5
Panel 17 Round 3 1,315 1,177 89.5
Round 5 1,308 1,174 89.8
Panel 18 Round 3 1,362 1,182 86.8
Round 5 1,342 1,187 88.5
Panel 19 Round 3 1,272 1,124 88.4
Round 5 1,316 1,144 87.2
Panel 20 Round 3 1,412 1,190 84.5
Round 5 1,386 1,174 84.9
Panel 21 Round 3 1,422 1,170 82.5
Round 5 1,481 1,212 81.8
Panel 22 Round 3 1,453 1,177 81.0
Round 5 1,348 1,018 75.5
Panel 23 Round 3 1,464 1,101 75.2
Round 5 1,350 933 69.1
Round 7 1,018 648 63.7
Round 9 813 446 54.9
Panel 24 Round 3 1,350 843 62.4
Round 5 1,082 599 55.4
Round 7 817 443 54.2
Panel 25 Round 3 963 514 53.4
Round 5 758 419 55.3
Panel 26 Round 3 894 516 57.7

* Tables represent combined DCS/proxy DCS collection.

Return To Table Of Contents

Table A-14. Results of patient profile collection

Pharmacy Total number Total received Percent received Total complete Completes as a percent of total
2019 – P22R5 all mail collection
Total RUs 921 173 18.8% 125 13.6%
Total Pairs 1,387 199 14.3% 183 13.2%
2018 – P21R5 all mail collection
Total RUs 2,920 417 20.7% 316 15.6%
Total Pairs 4,116 486 16.6% 425 14.5%
2017 – P20R5 all mail collection
Total RUs 1,953 342 17.5% 254 13.0%
Total Pairs 2,723 372 13.7% 326 12.0%
2016 – P19R5 all mail collection
Total RUs 2,038 374 18.4% 285 14.0%
Total Pairs 2,854 430 15.1% 394 13.8%
2015 – P18R5 all mail collection
Total RUs 1,404 260 18.5% 186 13.2%
Total Pairs 2,042 289 14.2% 255 12.5%
2014 – P17R5 all mail collection
Total RUs 2,230 372 16.7% 269 12.1%
Total Pairs 3,233 443 13.7% 386 11.9%
2013 – P16R5 all mail collection
Total RUs 2,014 417 20.7% 316 15.6%
Total Pairs 2,911 486 16.6% 425 14.5%
2012 – P15R5 all mail collection
Total RUs 1,390 290 20.8% 203 14.6%
Total Pairs 1,990 348 17.4% 290 14.5%

Return To Table Of Contents

Table A-15. Calls to respondent information line

Reason for call Spring 2000 (Panel 5 Round 1, Panel 4 Round 3, Panel 3 Round 5) Fall 2000 (Panel 5 Round 2, Panel 4 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address change 23 4.0 13 8.3 8 5.7
Appointment 37 6.5 26 16.7 28 19.9
Request callback 146 25.7 58 37.2 69 48.9
Refusal 183 32.2 20 12.8 12 8.5
Willing to participate 10 1.8 2 1.3 0 0.0
Other 157 27.6 35 22.4 8 5.7
Report a respondent deceased 5 0.9 1 0.6 0 0.0
Request a Spanish-speaking interview 8 1.4 1 0.6 0 0.0
Request SAQ help 0 0.0 0 0.0 16 11.3
Total 569 156 141

Reason for call Spring 2001 (Panel 6 Round 1, Panel 5 Round 3, Panel 4 Round 5) Fall 2001 (Panel 6 Round 2, Panel 5 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 27 3.7 17 12.7 56 15.7
Appointment 119 16.2 56 41.8 134 37.5
Request callback 259 35.3 36 26.9 92 25.8
No message 8 1.1 3 2.2 0 0.0
Other 29 4.0 7 5.2 31 8.7
Request SAQ help 0 0.0 2 1.5 10 2.8
Special needs 5 0.7 3 2.2 0 0.0
Refusal 278 37.9 10 7.5 25 7.0
Willing to participate 8 1.1 0 0.0 9 2.5
Total 733 134 357

Reason for call Spring 2002 (Panel 7 Round 1, Panel 6 Round 3, Panel 5 Round 5) Fall 2002 (Panel 7 Round 2, Panel 6 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 28 4.5 29 13.9 66 16.7
Appointment 77 12.5 71 34.1 147 37.1
Request callback 210 34.0 69 33.2 99 25.0
No message 6 1.0 3 1.4 5 1.3
Other 41 6.6 17 8.2 10 2.5
Request SAQ help 0 0.0 0 0.0 30 7.6
Special needs 1 0.2 0 0.0 3 0.8
Refusal 232 37.6 14 6.7 29 7.3
Willing to participate 22 3.6 5 2.4 7 1.8
Total 617 208 396

Reason for call Spring 2003 (Panel 8 Round 1, Panel 7 Round 3, Panel 6 Round 5) Fall 2003 (Panel 8 Round 2, Panel 7 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 20 4.2 33 13.7 42 17.9
Appointment 83 17.5 87 36.1 79 33.8
Request callback 165 34.9 100 41.5 97 41.5
No message 16 3.4 7 2.9 6 2.6
Other 9 1.9 8 3.3 3 1.3
Request SAQ help 0 0.0 0 0.0 1 0.4
Special needs 5 1.1 0 0.0 0 0.0
Refusal 158 33.4 6 2.5 6 2.6
Willing to participate 17 3.6 0 0.0 0 0.0
Total 473 241 234

Reason for call Spring 2004 (Panel 9 Round 1, Panel 8 Round 3, Panel 7 Round 5) Fall 2004 (Panel 9 Round 2, Panel 8 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 8 1.6 26 13.2 42 10.9
Appointment 67 13.3 76 38.6 153 39.7
Request callback 158 31.5 77 39.1 139 36.1
No message 9 1.8 5 2.5 16 4.2
Other 8 1.6 5 2.5 5 1.3
Proxy needed 5 1.0 2 1.0 0 0.0
Request SAQ help 0 0.0 0 0.0 2 0.5
Special needs 0 0.0 0 0.0 0 0.0
Refusal 228 45.4 6 3.0 27 7.0
Willing to participate 19 3.8 0 0.0 1 0.3
Total 502 197 385

Reason for call Spring 2005 (Panel 10 Round 1, Panel 9 Round 3, Panel 8 Round 5) Fall 2005 (Panel 10 Round 2, Panel 9 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 16 3.3 23 8.7 27 6.8
Appointment 77 15.7 117 44.3 177 44.4
Request callback 154 31.4 88 33.3 126 31.6
No message 14 2.9 11 4.2 28 7.0
Other 13 2.7 1 0.4 8 2.0
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 1 0.3
Special needs 1 0.2 1 0.4 0 0.0
Refusal 195 39.8 20 7.6 30 7.5
Willing to participate 20 4.1 3 1.1 2 0.5
Total 490 264 399

Reason for call Spring 2006 (Panel 11 Round 1, Panel 10 Round 3, Panel 9 Round 5) Fall 2006 (Panel 11 Round 2, Panel 10 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 7 1.3 24 7.5 11 4.1
Appointment 61 11.3 124 39.0 103 38.1
Request callback 146 27.1 96 30.2 101 37.4
No message 72 13.4 46 14.5 21 7.8
Other 16 3.0 12 3.8 8 3.0
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 4 0.7 0 0.0 0 0.0
Refusal 216 40.1 15 4.7 26 9.6
Willing to participate 17 3.2 1 0.3 0 0.0
Total 539 318 270

Reason for call Spring 2007 (Panel 12 Round 1, Panel 11 Round 3, Panel 10 Round 5) Fall 2007 (Panel 12 Round 2, Panel 11 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 8 2.1 21 7.3 23 7.6
Appointment 56 14.6 129 44.8 129 42.6
Request callback 72 18.8 75 26.0 88 29.0
No message 56 14.6 37 12.8 33 10.9
Other 20 5.2 15 5.2 6 2.0
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 5 1.3 0 0.0 1 0.3
Refusal 160 41.8 10 3.5 21 6.9
Willing to participate 6 1.6 1 0.3 2 0.7
Total 383 288 303

Reason for call Spring 2008 (Panel 13 Round 1, Panel 12 Round 3, Panel 11 Round 5) Fall 2008 (Panel 13 Round 2, Panel 12 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 20 3.4 12 4.7 21 5.7
Appointment 92 15.5 117 45.9 148 39.9
Request callback 164 27.6 81 31.8 154 41.5
No message 82 13.8 20 7.8 22 5.9
Other 13 2.2 12 4.7 8 2.2
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 4 0.7 0 0.0 0 0.0
Refusal 196 32.9 13 5.1 18 4.9
Willing to participate 24 4.0 0 0.0 0 0.0
Total 595 255 371

Reason for call Spring 2009 (Panel 14 Round 1, Panel 13 Round 3, Panel 12 Round 5) Fall 2009 (Panel 14 Round 2, Panel 13 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 10 2.2 13 4.3 19 5.1
Appointment 49 10.8 87 29.0 153 41.1
Request callback 156 34.4 157 52.3 153 41.1
No message 48 10.6 23 7.7 20 5.4
Other 3 0.7 8 2.7 3 0.8
Proxy needed 0 0.0 0 0.0 0 0.0
Request SAQ help 0 0.0 0 0.0 0 0.0
Special needs 4 0.9 0 0.0 0 0.0
Refusal 183 40.3 11 3.7 24 6.5
Willing to participate 1 0.2 1 0.3 0 0.0
Total 454 300 372

Reason for call Spring 2010 (Panel 15 Round 1, Panel 14 Round 3, Panel 13 Round 5) Fall 2010 (Panel 15 Round 2, Panel 14 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 2 0.8 42 8.2 25 5.3
Appointment 44 18.0 214 41.6 309 66.0
Request callback 87 35.7 196 38.1 46 9.8
No message 17 7.0 33 6.4 17 3.6
Other 7 2.9 8 1.6 14 3.0
Request SAQ help 0 0.0 0 0.0 12 2.6
SAQ refusal 0 0.0 0 0.0 1 0.2
Special needs 1 0.4 1 0.2 1 0.2
Refusal 86 35.2 20 3.9 43 9.2
Willing to participate 0 0.0 0 0.0 0 0.0
Total 244 514 468

Reason for call Spring 2011 (Panel 16 Round 1, Panel 15 Round 3, Panel 14 Round 5) Fall 2011 (Panel 16 Round 2, Panel 15 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 16 3.4 46 8.0 72 9.8
Appointment 175 37.6 407 71.0 466 63.5
Request callback 81 17.4 63 11.0 69 9.4
No message 24 5.2 26 4.5 23 3.1
Other 12 2.6 8 1.4 25 3.4
Request SAQ help 1 0.2 2 0.3 32 4.4
SAQ refusal 0 0.0 0 0.0 46 6.3
Special needs 0 0.0 0 0.0 1 0.1
Refusal 157 33.7 21 3.7 0 0.0
Willing to participate 0 0.0 0 0.0 0 0.0
Total 466 573 734

Reason for call Spring 2012 (Panel 17 Round 1, Panel 16 Round 3, Panel 15 Round 5) Fall 2012 (Panel 17 Round 2, Panel 16 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 18 5.0 107 13.4 108 12.2
Appointment 130 36.1 517 64.9 584 65.8
Request callback 60 16.7 94 11.8 57 6.4
No message 21 5.8 17 2.1 18 2.0
Other 10 2.8 25 3.1 16 1.8
Proxy needed 0 0.0 1 0.1 2 0.2
Request SAQ help 2 0.6 6 0.8 42 4.7
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 1 0.3 0 0.0 0 0.0
Refusal 117 32.5 30 3.8 60 6.8
Willing to participate 1 0.3 0 0.0 0 0.0
Total 360 797 887

Reason for call Spring 2013 (Panel 18 Round 1, Panel 17 Round 3, Panel 16 Round 5) Fall 2013 (Panel 18 Round 2, Panel 17 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 18 4.4 82 10.8 53 9.0
Appointment 143 35.0 558 73.0 370 62.6
Request callback 71 17.4 88 11.5 70 11.8
No message 8 2.0 11 1.4 16 2.8
Other 2 0.5 4 0.5 5 0.9
Proxy needed 1 0.2 1 0.1 1 0.2
Request SAQ help 1 0.2 0 0.0 31 5.3
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 2 0.5 0 0.0 2 0.3
Refusal 162 39.5 19 2.5 43 7.3
Willing to participate 1 0.2 1 0.1 0 0.0
Total 409 764 591

Reason for call Spring 2014 (Panel 19 Round 1, Panel 18 Round 3, Panel 17 Round 5) Fall 2014 (Panel 19 Round 2, Panel 18 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 11 3.2 71 11.1 62 8.4
Appointment 75 22.1 393 61.5 490 66.5
Request callback 70 20.6 113 17.7 70 9.5
No message 11 3.2 12 1.9 28 3.9
Other 0 0.0 5 0.8 7 0.9
Proxy needed 0 0.0 0 0.0 1 0.1
Request SAQ help 0 0.0 1 0.2 4 0.5
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 0 0.0 0 0.0 0 0.0
Refusal 165 48.5 44 6.9 74 10.0
Willing to participate 8 2.4 0 0.0 1 0.1
Total 340 639 737

Reason for call Spring 2015 (Panel 20 Round 1, Panel 19 Round 3, Panel 18 Round 5) Fall 2015 (Panel 20 Round 2, Panel 19 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 10 2.3 61 8.8 55 9.6
Appointment 95 21.8 438 63.4 346 60.7
Request callback 85 19.5 112 16.2 52 9.1
No message 14 3.2 17 2.5 4 0.7
Other 2 0.5 3 0.4 3 0.5
Proxy needed 1 0.2 7 1.0 8 1.4
Request SAQ help 1 0.2 3 0.4 11 1.9
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 0 0.0 0 0.0 0 0.0
Refusal 206 47.2 47 6.8 91 16.0
Willing to participate 22 5.0 3 0.4 0 0.0
Total 436 691 570

Reason for call Spring 2016 (Panel 21 Round 1, Panel 20 Round 3, Panel 19 Round 5) Fall 2016 (Panel 21 Round 2, Panel 20 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 8 2.7 64 11.7 48 7.9
Appointment 93 30.9 362 66.2 373 61.7
Request callback 47 15.6 59 10.8 83 13.7
No message 1 0.3 7 1.3 6 1.0
Other 2 0.7 1 0.2 3 0.5
Proxy needed 0 0.0 5 0.9 6 1.0
Request SAQ help 0 0.0 3 0.5 11 1.8
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 1 0.3 0 0.0 0 0.0
Refusal 139 46.2 46 8.4 75 12.4
Willing to participate 10 3.3 0 0.0 0 0.0
Total 301 547 605

Reason for call Spring 2017 (Panel 22 Round 1, Panel 21 Round 3, Panel 20 Round 5) Fall 2017 (Panel 22 Round 2, Panel 21 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 10 2.9 51 9.6 35 6.8
Appointment 86 24.9 355 66.6 318 61.4
Request callback 59 17.1 90 16.9 64 12.4
No message 1 0.3 2 0.4 5 1.0
Other 2 0.6 3 0.6 4 0.8
Proxy needed 1 0.3 7 1.3 5 1.0
Request SAQ help 1 0.3 0 0.0 15 2.9
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 0 0.0 1 0.2 1 0.2
Refusal 172 49.7 23 4.3 70 13.5
Willing to participate 14 4.0 1 0.2 1 0.2
Total 346 533 518

Reason for call Spring 2018 (Panel 23 Round 1, Panel 22 Round 3, Panel 21 Round 5) Fall 2018 (Panel 23 Round 2, Panel 22 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 5 1.3 37 7.9 38 7.3
Appointment 59 15.4 318 68.1 335 63.9
Request callback 50 13.1 50 10.7 60 11.5
No message 4 1.0 5 1.1 1 0.2
Other 0 0.0 1 0.2 3 0.6
Proxy needed 2 0.5 4 0.9 6 1.1
Request SAQ help 0 0.0 1 0.2 15 2.9
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 1 0.3 0 0.0 0 0.0
Refusal 211 55.1 46 9.9 61 11.6
Willing to participate 51 13.3 5 1.1 5 1.0
Total 383 467 524

Reason for call Spring 2019 (Panel 24 Round 1, Panel 23 Round 3, Panel 22 Round 5) Fall 2019 (Panel 24 Round 2, Panel 23 Round 4)
Round 1 Rounds 3 and 5 Rounds 2 and 4
N % N % N %
Address/telephone change 5 1.5 36 7.4 30 5.6
Appointment 59 17.2 328 67.5 344 64.8
Request callback 39 11.4 56 11.5 56 10.5
No message 2 0.6 4 0.8 7 1.3
Other 2 0.6 4 0.8 0 0.0
Proxy needed 2 0.6 6 1.2 11 2.1
Request SAQ help 0 0.0 2 0.4 5 0.9
SAQ refusal 0 0.0 48 9.9 0 0.0
Special needs 0 0.0 0 0.0 0 0.0
Refusal 185 53.9 0 0.0 78 14.7
Willing to participate 49 14.3 2 0.4 0 0.0
Total 343 486 531

Reason for call Spring 2020 (Panel 25 Round 1, Panel 24 Round 3, Panel 23 Round 5) Fall 2020 (Panel 25 Round 2, Panel 24 Round 4, Panel 23 Round 6)
Round 1 Rounds 3 and 5 Rounds 2, 4, and 6
N % N % N %
Address/telephone change 5 0.9 37 6.3 28 2.4
Appointment 142 24.2 332 56.1 278 23.9
Request callback 102 17.4 121 20.4 276 23.7
No message 22 3.8 18 3.0 60 5.2
Other 2 0.3 5 0.8 5 0.4
Proxy needed 6 1.0 3 0.5 10 0.9
Request SAQ help 0 0.0 1 0.2 35 3.0
SAQ refusal 0 0.0 0 0.0 1 0.1
Special needs 0 0.0 0 0.0 1 0.1
Refusal 209 35.7 62 10.5 203 17.5
Willing to participate 98 16.7 13 2.2 266 22.9
Total 586 592 1,163

Reason for call Spring 2021 (Panel 26 Round 1, Panel 25 Round 3, Panel 24 Round 5, Panel 23 Round 7) Fall 2021 (Panel 26 Round 2, Panel 25 Round 4, Panel 24 Round 6, Panel 23 Round 8)
Round 1 Rounds 3, 5, 7 Rounds 2, 4, 6, 8
N % N % N %
Address/telephone change 2 0.6 19 3.4 59 7.0
Appointment 27 8.1 76 13.7 233 27.5
Request callback 101 30.1 240 43.2 287 33.8
No message 34 10.1 21 3.8 41 4.8
Other 8 2.4 48 8.6 8 0.9
Proxy needed 0 0.0 7 1.3 13 1.5
Request SAQ help 3 0.9 17 3.1 15 1.8
SAQ refusal 0 0.0 1 0.2 0 0.0
Special needs 0 0.0 2 0.4 1 0.1
Refusal 87 26.0 87 15.7 176 20.8
Willing to participate 73 21.8 37 6.7 15 1.8
Total 335 555 848

Reason for call Spring 2022 (Panel 27 Round 1, Panel 26 Round 3, Panel 25 Round 5, Panel 24 Round 7, Panel 23 Round 9) Fall 2022 (Panel 27 Round 2, Panel 26 Round 4, Panel 24 Round 8)
Round 1 Rounds 3, 5, 7, 9 Rounds 2, 4, 8
N % N % N %
Address/telephone change 4 0.9 42 5.1 25 4.3
Appointment 91 21.4 215 26.3 99 17.0
Request callback 130 30.5 236 28.9 260 44.5
No message 13 3.1 23 2.8 22 3.8
Other 21 4.9 236 28.9 84 14.4
Proxy needed 4 0.9 6 0.7 6 1.0
Request SAQ help 0 0.0 0 0.0 0 0.0
SAQ refusal 0 0.0 0 0.0 0 0.0
Special needs 0 0.0 0 0.0 0 0.0
Refusal 119 27.9 58 7.1 82 14.0
Willing to participate 44 10.3 2 0.2 6 1.0
Total 426 818 584

Return To Table Of Contents

Table A-16. Files delivered during 2022

Date Description
1/3/2022 DOCM0703.01: Delivery of the 2022 NPI Provider Directory from the Panel 27 MEPS Laptop
1/3/2022 HINS1349.01: Changes in HINS Medical Debt Variables (PROBPY42-PYUNBL42)
1/3/2022 UEGN2885.01: 2020 Specifications for Rolling Events Before Edits
1/3/2022 UEGN3617.01: Deliver to AHRQ for approval variable lists for the PUF non-MPC (DN, OM, and HH) Expenditure Event files (Completed 01/14/22)
1/4/2022 HLTH1067.01: Delivery of Adult and Child Height and Weight for the MEPS Master Files for FY 2020
1/4/2022 PRPL0165.01: Output and Frequencies from 2020 PRPL Program #1
1/4/2022 UEGN2886.01: 2020 Specs for Mom-Baby SBD Rollups
1/4/2022 UEGN3618.01: The 2020 Utilization Standard Error Benchmarking Tables Using Person Use PUF Weights - PERWT20P
1/5/2022 COND0997.01: FY20 Preliminary Conditions File Construction Pregnancy Codes Masking
1/5/2022 EMPL2252.01: Comparison of Panel 23 Employment Population Characteristic Variables Using Unadjusted and Adjusted Data
1/5/2022 GNRL3085.01: List of CAPI Supplemental Sections and Round-Specific Forms
1/5/2022 HINS1346.06: Delivery of the 2020 HINS Month-by-Month, Tricare plan, Private, Medicare, and Medicaid HMO/Gatekeeper, and PMEDIN/DENTIN Variables
1/5/2022 UEGN2887.01: 2020 Specifications for HHA Edits
1/5/2022 WGTS2036.01: Panel 25 Full-Year 2020 SAQ Population Characteristics person weight review output
1/5/2022 WGTS2037.01: Panel 24 Full-Year 2020 SAQ Population Characteristics person weight review output
1/5/2022 WGTS2038.01: Panel 23 Full-Year 2020 SAQ Population Characteristics person weight review output
1/6/2022 ADMN0924.01: Delivery of 2020 FAMID Variables and CPS Family Identifier
1/6/2022 EMPL2247.09: Approval of Recalculated Weighted NUMEMP Medians for Panel 23 Round 5-7 Using Adjusted Data
1/6/2022 UEGN2888.01: 2020 Specs for HHA Free Donor Fix
1/7/2022 WGTS2039.01: Full-Year 2020 SAQ Population Characteristics person weight for the combined panels review output to AHRQ
1/10/2022 EMPL2253.01: FY 2020 Hourly Wage Imputation Output for Approval
1/10/2022 UEGN2889.01: Specifications for Global Fee Bundle Processing
1/10/2022 UEGN2890.01: 2020 Specifications for LOS Imputations
1/11/2022 DOCM0700.02: Delivery of the 2021 MPC files for Sample selection - Wave 1
1/11/2022 DOCM0701.02: Delivery of the 2021 PC Sample file - Wave 1
1/11/2022 DOCM0702.02: Delivery of the 2021 Provider file for NPI coding - Wave 1
1/11/2022 WGTS2011.01: Panel 23 Full-Year 2019: Derivation of Eligibility and Response Indicators for the CPS-like Families
1/12/2022 GNRL4068.01: FY 2020 (Panel 23, Panel 24 and Panel 25) Snapshots of HC Source Tables Including the COND20X, JOBS20X, SAQ, and DCS Tables
1/12/2022 UEGN2891.01: 2020 Specifications for MPC Edits
1/13/2022 PRPL0164.26: FY20 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking
1/13/2022 UEGN2893.01: 2020 Specifications for Post-Edit Rollups
1/13/2022 WGTS2008.01: Deriving location variables (Region and MSA) for Panel 25 Round 1, based on Geo FIPS Codes, using the OMB MSA definitions of both year 2013 and the most recent OMB MSA updates
1/13/2022 WGTS2014.01: P23FY2019 Person-level SAQ Expenditure Weights
1/13/2022 WGTS2015.01: P24FY2019 Person-level SAQ Expenditure Weights
1/13/2022 WGTS2027.01: Deriving Location Variables (Region and MSA) for Panels 23, 24 and 25, Full-Year 2020, based on Geo FIPS Codes, using OMB MSA definitions of both Year 2020 and the Current (2021) Year
1/13/2022 WGTS2028.01: Derivation of MEPS Panel 23 Full-Year 2020 Person Use Weights (Rounds 5-7)
1/13/2022 WGTS2034.01: Create the P23P24P25 Full-Year 2020 "Base Weight" and the Location Variable Delivery File
1/13/2022 WGTS2045.01: Create the P23P24P25 Full-Year 2020 Person Use Weight and Individual Panel Weights Delivery File
1/14/2022 DEMO1019.02: Delivery of the Output Listings for Final Case Review of the MOPID and DAPID Variables' Construction for FY2020
1/14/2022 EMPL2254.01: Full-Year 2020 Wage Top Code Value for AHRQ Approval
1/14/2022 GNRL3086.01: Preliminary Version of the 2020 Full-Year Use PUF Dataset
1/14/2022 UEGN 2895.01: 2020 Specifications for Imputing Expenditures for Capitated Events
1/14/2022 PRPL0166.01: FY20 PRPL Specifications for the OOPELIG and Imputation creation programs
1/14/2022 UEPD1222.05: 2020 INSURC20 variable for use in the Prescribed Medicines Imputation
1/14/2022 WGTS2032.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full-Year 2020 USE Person Weights
1/14/2022 WGTS2036.01: Developing Panel 25 Self-Administered Questionnaire (SAQ) Use Weights for Full-Year 2020
1/14/2022 WGTS2037.01: Developing Panel 24 Self-Administered Questionnaire (SAQ) Use Weights for Full-Year 2020
1/14/2022 WGTS2038.01: Developing Panel 23 Self-Administered Questionnaire (SAQ) Use Weights for Full-Year 2020 (Rounds 5-7)
1/14/2022 WGTS2039.01: Developing Sample Weights for the MEPS Self-Administered Questionnaire (SAQ) for the Panels 23, 24, and 25 Full-Year 2020 Use File (PUF), and Creating the Full-Year 2020 Person Use SAQ Weights Delivery File
1/14/2022 WGTS2042.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full-Year 2020 Self-Administered Questionnaire (SAQ) Use Person Weight
1/14/2022 WGTS2044.01: MEPS Panels 23, 24, and 25 Full-Year 2020: Combine and Rake the P23, P24, and P25 Weights to Obtain the P23P24P25FY20 Person-Level USE Weights
1/18/2022 GNRL4071.01, GNRL4071.02, GNRL4071.03, GNRL4071.04: Delivery of the Person-Level End-Of-Round Files - P23R8/P24R6/P25R4/P26R2
1/18/2022 GNRL4073.01, GNRL4073.02, GNRL4073.03, GNRL4073.04: Delivery of the RU-Level End-Of-Round Files - P23R8/P24R6/P25R4/P26R2
1/18/2022 PRPL0167.01: Output and Frequencies from 2020 PRPL Program #2
1/18/2022 UEGN2896.01: 2020 Specifications for SBD Edits
1/18/2022 UEGN2897.01: 2020 Specifications for MPC Free Donor Fix
1/18/2022 WGTS5038.01: Delivery of the SAQ Use PUF Weight and Individual Panel SAQ Weight Variables for FY2020
1/19/2022 GNRL1902.02: FY 2016 Preliminary Conditions File and Codebook, NCHS Checklist, Delivery Document, and Recode Document – Revised
1/19/2022 GNRL1968.02: FY 2017 Preliminary Conditions File, Codebook, Recode Document, NCHS Checklist, and Delivery Document – Revised
1/19/2022 UEGN2898.01: 2020 Specifications for SBD Free Donor Fix
1/19/2022 WGTS2009.01: Updating Master Variance File Strata and PSUs for Panel 25, Round 1
1/19/2022 WGTS2043.01: MEPS: Establishing Variance Estimation Strata and PSUs for Panel 25, Round 1, Panel 24, Round 3, and Panel 23, Round 5
1/20/2022 EMPL2255.01: Employment Portion of the 2020 Population Characteristics Public Use Release Document – For First Review & Mark-Up
1/20/2022 GNRL4075.01: Delivery of the Single Round Data Exchange (SRD) for Panel 26 Round 2
1/20/2022 GNRL4076.01: Delivery of the Single Round Data Exchange (SRD) for Panel 25 Round 4
1/20/2022 GNRL4077.01: Delivery of the Single Round Data Exchange (SRD) for Panel 24 Round 6
1/20/2022 GNRL4078.01: Delivery of the Single Round Data Exchange (SRD) for Panel 23 Round 8
1/20/2022 INCO0757.01: Delivery of the 2020 (Panel 23 & 24 & 25) Income File
1/20/2022 UEGN2899.01: 2020 Specifications for Household Discount Adjustment Class Variables
1/20/2022 UEGN2900.01: 2020 Specifications for Capitation Imputation Class Variables
1/20/2022 WGTS2009.01: Updating Master Variance File Strata and PSUs for Panel 25, Round 1
1/20/2022 WGTS2031.01: Derivation of the Annualized MEPS Families and Identification of the Responding MEPS Families for MEPS Panel 25 Full-Year 2020
1/20/2022 WGTS2041.01: MEPS: Establishing Variance Estimation Strata and PSUs, and Estimating Standard Errors Using SUDAAN for the Full-Year 2020 PUF, Panel 23, Rounds 5-7, Panel 24, Rounds 3-5, and Panel 25, Rounds 1-3
1/20/2022 WGTS2043.01: MEPS: Establishing Variance Estimation Strata and PSUs for Panel 25, Round 1, Panel 24, Round 3, and Panel 23, Round 5
1/24/2022 GNRL1902.06: FY 2016 Preliminary Conditions File and Codebook, NCHS Checklist, Delivery Document, and Recode Document – Revised
1/25/2022 EMPL2256.01: Full-Year 2020 JOBS File establishment size top code value and extent of JOBS wage top coding for AHRQ approval
1/28/2022 UEGN2901.01: 2020 Specifications for Preparing SBD Nodes for Editing
1/31/2022 FOOD0008.01: FY2020 Food Security PUF Constructed Variable Specifications
2/1/2022 HINS1350.01: FY2021 Design Change Memo: Summary of the MEPS Household Component CAPI for FY2021 (P23 R7-9, P24 R5-7, P25 R3-5, and P26 R1-3) and Potential Effect on 2021 Data Delivery Content
2/1/2022 PRPL0168.01: Output and Frequencies from 2020 PRPL Program #3a – Panel 25
2/2/2022 ADMN0925.01: FY21 Design changes for ADMN/DEMO
2/2/2022 DEMO1019.03: Delivery of the MOPID, DAPID, and Related Variables for FY2020
2/2/2022 EMPL2257.01: Summary of the MEPS Household Component CAPI for FY2021 (P23 R7-9, P24 R5-7, P25 R3-5, and P26 R1-3) and Potential Effect on 2021 Data Delivery Content – EMPLOYMENT
2/2/2022 PRPL0168.02: Output and Frequencies from 2020 PRPL Program #3a – Panel 24
2/4/2022 PRPL0168.03: Output and Frequencies from Rerun of 2020 PRPL Program #3a – Panel 25
2/4/2022 UEGN2902.01: 2020 MPC provider reported high payout ratio or low charge events
2/7/2022 UEGN3621.01: Deliver to AHRQ for approval variable list for the PUF MPC (OP, ER, OB and IP) Expenditure Event files (Completed 02/21/22)
2/8/2022 ADMN0926.01: FY21 ADMN/DEMO Basic edits specs
2/8/2022 EMPL2256.07: Full-Year 2020 JOBS File establishment size top code value and extent of JOBS wage top coding for AHRQ approval
2/8/2022 EMPL2256.08: Full-Year 2020 JOBS File establishment size top code value and extent of JOBS wage top coding for AHRQ approval
2/8/2022 EMPL2258.01: Delivery of Full-Year 2020 Pre-Top-Coded Hourly Wage Variables and Person-Level, Uncondensed Industry and Occupation Codes
2/8/2022 EMPL2259A.01: Full-Year 2020 Wage Top Coding Results
2/8/2022 GNRL3087.01: NCHS Checklist and FY 2020 Use PUF Preliminary Delivery Document
2/8/2022 GNRL3088.01: NCHS Checklist and Preliminary Version of the 2020 JOBS File Delivery Document for Review
2/8/2022 UEGN2904.01: 2020 Specifications for Attaching SBD Expenditures to Facility Events (SBDATTACH)
2/9/2022 EMPL2258.03: Delivery of Full-Year 2020 Pre-Top-Coded Hourly Wage Variables and Person-Level, Uncondensed Industry and Occupation Codes
2/9/2022 UEGN2905.01: 2020 MPC Edit 1 Issue
2/10/2022 PRPL0168.04: Output and Frequencies from Rerun of 2020 PRPL Program #3a – Panel 24
2/10/2022 PRPL0168.05: Output and Frequencies from 2020 PRPL Program #3a – Panel 23
2/10/2022 UEGNs 2881.02 and 2891.02: 2020 Specifications for MPC Edits for main and rolling events
2/11/2022 EMPL2259.00: Employment Person-Level Variable & Related Process Specifications for the Full-Year 2021 Population Characteristics/Consolidated PUFs
2/11/2022 GNRL1939.03: HC-190: Delivery of the Final 2016 Conditions File and All Related Files for Web Release – Redelivery
2/11/2022 GNRL1996.02: HC-199: Delivery of the Final 2017 Conditions File and All Related Files for Web Release – Redelivery
2/15/2022 HLTH1068.01: Full-Year 2021 HLTH Basic Edit Specifications
2/16/2022 GNRL3090.01: Preliminary Version of the 2020 Jobs File Codebook and Updated Delivery Document for AHRQ and NCHS Review
2/16/2022 GNRL3091.01: Preliminary Versions of the Codebook and Delivery Document of the FY 2020 Use PUF for Use in AHRQ and NCHS Review
2/16/2022 GNRL3092.01: Preliminary Version of the 2020 Jobs PUF Data Set
2/16/2022 GNRL3093.01: Preliminary Version of the 2020 Use PUF Data Set
2/17/2022 PRPL0168.05R: Output and Frequencies from 2020 PRPL Program #3a – Panel 23
2/17/2022 UEGN2906.01: 2020 Specifications for Rolling SBDs to Facility Event Level
2/17/2022 UEGN2903.01: 2020 Specifications for Breaking Matches Per AHRQ Recommendation for the Provider Reported High Payout or Low Total Charge events
2/17/2022 WGTS5039.01: Delivery of the MVOP Status-Raked Population Characteristics Person Weights for FY20
2/18/2022 GNRL3094.01: FY 2020 Person-Level Consolidated PUF Variable List Changes for AHRQ Review
2/18/2022 GNRL3089.01: Review request – Full-Year 2020 CAPI Specifications and Help Text in HTML Format for Web Release
2/18/2022 UEGN2907.01: 2020 Listing of Two Unmatched HC ER-HS linked sets with Questionable Reported Expenditures
2/21/2022 DOCM0700.03: Delivery of the 2021 MPC Sample file - Wave 2 testing
2/23/2022 PRPL0166.02: FY20 PRPL Specification for Final Formatting of H223 PRPL file
2/23/2022 WGTS2051.01: Panel 23 Full-Year 2020, Evaluation of the nonresponse adjustments applied to the Population Characteristics person weight to reduce nonresponse bias on the poverty distribution estimates.
2/23/2022 WGTS2052.01: Panel 24 Full-Year 2020, Evaluation of the nonresponse adjustments applied to the Population Characteristics person weight to reduce nonresponse bias on the poverty distribution estimates.
2/24/2022 CODE0944.01: 2020 File of GEO Coded Addresses for the MEPS Master Files
2/24/2022 WGTS5039.02: Delivery of the MVOP Status-Raked Population Characteristics Person Weights for FY20 – Version 2
2/24/2022 WGTS2023.01: MEPS Panel 25 Round 1 – Person-Level Weights
2/25/2022 PRPL0169.01: Output and Frequencies from 2020 PRPL Program # 3b
2/28/2022 EMPL2259.01: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/Consolidated PUFs – Set 1
3/1/2022 GNRL3090.02: Final Version of the 2020 Jobs File Codebook and Delivery Document for AHRQ and NCHS Review
3/1/2022 GNRL3094.01: FY 2020 Person-Level Consolidated PUF Variable List Changes – Final
3/1/2022 WGTS2024.01: Derivation of MEPS Panel 24 Full-Year 2020 Person Use Weights (Rounds 3-5)
3/1/2022 UEGN 2908.01: 2020 Benchmark Tables: Initial Delivery
3/1/2022 UEGN3622.01: The 2020 DN/HHP/OM/HHA Events Final Imputation Files
3/3/2022 DOCM0701.01: Original Pharmacy AF Request
3/3/2022 UEGN 2909.01: 2020 SBD Reconciliation Table
3/4/2022 PRPL0170.01: Output and Frequencies from 2020 PRPL Program #4
3/7/2022 PRPL0166.10: FY20 PRPL Specification for Final Formatting of H223 PRPL file
3/7/2022 WGTS5039.03: Delivery of the MVOP Status-Raked Population Characteristics Person Weights for FY20 – Version 3
3/9/2022 COND0999.01: Delivery of Updated 2016/2017 Conditions Datasets for Review
3/9/2022 UEGN2908.02: 2020 Benchmark Tables: Second Delivery
3/9/2022 UEGN3622.02: The 2020 MVN Final Imputation File
3/10/2022 PRPL0171.01: FY2020 COVRUNOS = 91 Editing Decisions
3/11/2022 PRPL0170.02: Output and Frequencies from 2020 PRPL Program #4 - RERUN
3/14/2022 HINS1351.01: Delivery of the New/Revised Specifications for the FY2021 Panel 23, 24, 25, and 26 HINS Variables
3/14/2022 PRPL0171.07: FY2020 COVRUNOS = 91 Editing Decisions
3/17/2022 GNRL3095.01: HC-218: 2020 Jobs Public Use File Delivery for Web Release
3/17/2022 GNRL3096.01: HC-219: Delivery of the Full-Year 2020 Use PUF for Web Release
3/21/2022 EMPL2259.02: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 1 (revised)
3/21/2022 GNRL4081.01: Delivery of the File Containing Variables Recoded or Dropped from the USE PUF Due to DRB Review – P23/P24/P25
3/22/2022 DSDY0068.01: Delivery of the DSDY Variable Specifications FY21 for AHRQ Approval
3/22/2022 HINS1352.01: Delivery of the Basic and Inter-round Edit Specifications for FY21 HINS Panels 23, 24, 25, and 26
3/25/2022 PRPL0172.02: Comparing PRPL Premium Imputation Groups, Class Variables, and Premiums
3/28/2022 DSDY0069.01: FY 2021 Disability Days Basic Edit Specifications
3/29/2022 ACCS0197.01: 2020 ACCS and COVID Constructed Variable Specifications
3/29/2022 HINS1351.01: Delivery of the New/Revised Specifications for the FY2021 Panel 23, 24, 25, and 26 HINS Variables
3/29/2022 HLTH1070.01: Full-Year 2021 SDOH Basic Edit Specifications
3/30/2022 EMPL2259.03: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 2
3/30/2022 HINS1352.06: Delivery of the Basic and Inter-round Edit Specifications for FY21 HINS Panels 23, 24, 25, and 26
3/30/2022 PRPL0172.06: Comparing PRPL Premium Imputation Groups, Class Variables, and Premiums
3/31/2022 PRPL0172.03: Comparing PRPL Premium Imputation Groups, Class Variables, and Premiums
4/1/2022 ADMN0927.01: FY21 ADMN/DEMO Constructed Variable Specs
4/1/2022 EMPL2259.06: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 1 (revised)
4/4/2022 EMPL2260.01: Full-Year 2021 Employment Source Variable Editing Specifications
4/5/2022 HINS1352.13: Delivery of the Basic and Inter-round Edit Specifications for FY21 HINS Panels 23, 24, 25, and 26
4/6/2022 UEGN2908.09: 2020 Benchmark Tables: Third Delivery
4/6/2022 UEGN3622.03: The 2020 Final Imputation Files: ER, HS, MVE, OP and SBD
4/7/2022 COND1000.01: 2020 Conditions PUF Specifications
4/7/2022 DOCM0700.04: Delivery of the 2021 MPC files for Sample selection - Wave 2
4/7/2022 DOCM0701.03: Delivery of the 2021 PC Sample file - Wave 2
4/7/2022 DOCM0702.03: Delivery of the 2021 Provider file for NPI coding - Wave 2
4/7/2022 EMPL2261.01: Delivery of 2020 Covered Person Records for Employment Variable Imputation
4/7/2022 PRPL0173.01: Delivery of the FY 2020 OOPELIG2 Dataset for Approval
4/8/2022 EMPL2259.07: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 1 (revised)
4/12/2022 EMPL2259.08: 2021 FY USE Employment Specs - Set 2 review
4/12/2022 GNRL3097.01: NCHS Checklist and Preliminary Version of the 2020 Conditions File Delivery Document and Recode Materials for Review
4/12/2022 GNRL3098.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2020 Non-MPC Event (DV, OM, and HH) PUFs
4/12/2022 HLTH1071.01: Full-Year 2021 HLTH Constructed Variable Specifications
4/18/2022 CODE0946.01: Specifications for the FY 2021 Person-level GEO Coded Address File
4/19/2022 UEPD1224.01: Delivery of the FY2021 PMED Basic Edit specifications
4/20/2022 GNRL3099.01: FY 2020 Preliminary Conditions File, Codebook, and Delivery Document
4/20/2022 GNRL3100.01: Preliminary Versions of the 2020 Non-MPC Event (DV, OM, and HH) PUF Codebooks and Documents for Use in AHRQ and NCHS Review
4/20/2022 GNRL3101.01: 2020 Preliminary Non-MPC Event (DV, OM, and HH) PUF Data Sets
4/21/2022 PRPL0174.01: Delivery of the FY 2020 PRPL Hot Deck Imputation Results for Approval
4/22/2022 WGTS5040.01: Delivery of the Nursing Home Adjusted Person Weights for FY20
4/26/2022 EMPL2259.09: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 1 (revised)
4/26/2022 GNRL3100.02: Preliminary Versions of the 2020 Non-MPC Event (DV, OM, and HH) PUF Codebooks and Documents for Use in AHRQ and NCHS Review – Updated
4/26/2022 UEGN 2911.01: 2020 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the non-MPC Event Types
4/28/2022 EMPL2259.12: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs – Set 1 (revised)
4/29/2022 HLTH1071.05: Full-Year 2021 HLTH Constructed Variable Specifications
5/2/2022 PCND0163.01: 2021 PCND Constructed Variable Specifications
5/3/2022 EMPL2259.19: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs - Set 1 (revised)
5/3/2022 PRPL0174.04: Delivery of the FY 2020 PRPL Hot Deck Imputation Results for Approval
5/4/2022 HLTH1072.01: Full-Year 2021 SDOH Constructed Variable Specifications
5/4/2022 PRPL0174.07: Delivery of the FY 2020 PRPL Hot Deck Imputation Results for Approval
5/5/2022 EMPL2259.20: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs - Set 1 (revised)
5/5/2022 UEGN 2912.01: 2020 Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the MPC Event Types
5/6/2022 EMPL2259.22: Employment Person-Level Variable, Related Variable Processing, & New Internal Use Variable Specifications for the Full-Year 2021 Population Characteristics/ Consolidated PUFs - Set 1 (revised)
5/9/2022 ACCS0197.09: 2020 ACCS and COVID Constructed Variable Specifications
5/10/2022 ACCS0198.01: 2021 ACCS and COVID Basic Edit Specifications
5/10/2022 GNRL3102.01: NCHS Checklists and Preliminary Versions of Documents for the FY 2020 MPC Event (IP, ER, OP, OB) PUFs
5/10/2022 PCND0165.01: 2021 PCND Basic Edit Specifications
5/10/2022 WGTS2025.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Panel 25 Round 1 Person Weights.
5/10/2022 WGTS2033.01: Derivation of the annualized MEPS Families and Identification of the Responding MEPS Families for the Panel 23 Full-Year 2020
5/10/2022 WGTS2062.01: Derivation of MEPS Panel 23 Full-Year 2020 Person Use Weights (Rounds 5-7) – with additional raking dimension R_MVOP
5/11/2022 PRPL0175.01: Linked Panel 23 PRPL Records where the JOBSIDX is not in the 2020 Jobs File Due to Special Panel 23 Job Roster Adjustment
5/12/2022 PCND0163.05: 2021 PCND Constructed Variable Specifications
5/17/2022 PCND0163.08: 2021 PCND Constructed Variable Specifications
5/17/2022 UEPD1225.03: Delivery of 2020 PMED PUF (TC20XTABS.lst, TC20XTABS.xlsx)
5/17/2022 UEPD1225.01: Delivery of the 2020 PMED PUF (RX20V01 and RX20V02)
5/18/2022 GNRL3103.01: Preliminary Versions of the 2020 MPC Event (IP, ER, OP, OB) PUF Codebooks and Documents for Use in AHRQ and NCHS Review
5/18/2022 GNRL3104.01: Preliminary Versions of the 2020 MPC Event (IP, ER, OP, OB) PUF Data Sets
5/18/2022 WGTS2066.01: FY2020 Combined Panels Expenditure person weight review output
5/23/2022 EMPL2259.23: Employment Person-Level Variable & Related Process Specifications for the Full-Year 2021 Population Characteristics/Consolidated PUFs
5/23/2022 WGTS5041.01: Delivery of the FY 2020 Expenditure File Original Person Weight
5/23/2022 WGTS2044.02: MEPS Panels 23, 24, and 25 Full-Year 2020: Combine and Rake the P23, P24, and P25 Weights to Obtain the P23P24P25FY20 Person-Level USE Weights
5/25/2022 COND1001.01: Ad Hoc Request: Conditions Data Comparison FY20/FY19
5/25/2022 COND1002.01: FY 2020 Preliminary CLNK File
5/26/2022 HLTH1064.02: Delivery of FY19 VSAQ and Population Characteristics Variables
5/26/2022 WGTS5042.01: Delivery of the FY 2020 Expenditure File Final Person Weight – PERWT20F
5/27/2022 EMPL2259.24: Employment Person-Level Variable & Related Process Specifications for the Full-Year 2021 Population Characteristics/Consolidated PUFs
5/27/2022 UEGN2908.04: 2020 Benchmark Tables: Fourth Delivery
5/27/2022 UEGN2913.01: 2021 Questions Related to the Implementation of Recommended Changes in the Processing of Flat Fees
5/31/2022 CODE0948.01: PMED Matching Programs Log and LST Files for FY21 Wave 1
5/31/2022 UEGN2908.09: 2020 Benchmark Tables: Fourth Delivery
5/31/2022 UEGN3622.04: The Version 2 of the 2020 Final Imputation Files: ER, HS, MVE, OP and SBD
5/31/2022 UEPD1225.15: Delivery of the 2020 PMED PUF (RX20V05.PDF, RX20V06.PDF, RX20V05X.PDF, TOP10RX20_USE.PDF, TOP10TC20_USE.PDF, TOP10TC20_EXP.PDF, TOP25RX20_EXP.PDF)
6/1/2022 WGTS2067.01: Full-Year 2020 Panel 23 SAQ Expenditure person weight review output
6/1/2022 WGTS2068.01: Full-Year 2020 Panel 24 SAQ Expenditure person weight review output
6/1/2022 WGTS2069.01: Full-Year 2020 Panel 25 SAQ Expenditure person weight review output
6/1/2022 UEGN3618.02: The 2020 Utilization Standard Error Benchmarking Tables Using the Person-Level PERWT20F Weight and Updated Panel Weight
6/2/2022 WGTS2070.01: Full-Year 2020 combined panels SAQ expenditure person weight review output
6/3/2022 PRPL0176.01: Delivery of the FY 2020 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, the Preliminary Encrypted Delivery Dataset, and the Preliminary Unencrypted Delivery Dataset
6/6/2022 GNRL4068.02: Addendum to the FY 2022 (Panel 23, Panel 24 and Panel 25) Delivery Database Snapshots: Edited Segments since the Previous Delivery of 1/12/22
6/6/2022 UEPD1225.06: Delivery of 2020 PMED PUF (RX20V05X) SAS dataset and the format files (RX20V05X.sas7bcat, rx20v05xf.sas and rxexpf2.sas)
6/8/2022 WGTS2074.01: Full-Year 2020 DCS expenditure weight review output
6/9/2022 WGTS2079.01: Full-Year 2020 Consolidated PUF Family weights review output
6/10/2022 WGTS2072.01: Full-Year 2020 individual panel expenditure weights review output
6/13/2022 GNRL3105.01: HC-220d, HC-220e, HC-220f, and HC-220g: 2020 MPC Expenditure Event Types (IP, ER, OP, and OB) Codebook and Dataset Files for Web Release
6/13/2022 GNRL3106.01: HC-220b, HC-220c, and HC-220h: 2020 Expenditure Event Codebook for Non-MPC Event Types (DV, OM, and HH) and Dataset Files for Web Release
6/13/2022 UEPD1225.07: Deliver the 2020 PMED PUF data (RX20V06.sas7bdat) and the format files (RX20V06.sas7bcat, rxexpv06f.sas and rxexpv06f2.sas)
6/14/2022 GNRL3107.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2020 Prescribed Medicines (PMED) PUF
6/16/2022 GNRL3108.01: Preliminary Versions of Documents for the FY 2020 non-MPC Event (DV, OM, and HH) and MPC Event (IP, ER, OP, OB) PUFs – Updated
6/16/2022 WGTS5043.01: Delivery of the Individual Panel 23, Panel 24, and Panel 25 SAQ Expenditure Weight for FY2020
6/16/2022 WGTS5044.01: Delivery of the Poverty-Adjusted Family-Level Weight, CPS-Like Family-Level Weight, Poverty-Adjusted DCS and SAQ Weights for FY2020
6/16/2022 WGTS5045.01: Delivery of the Individual Panel Raked Person Weights for P23/P24/P25 FY20
6/17/2022 GNRL3110.01: Section 3 of FY2020 Non-MPC Event (H220b, H220c, h220h), MPC Event (H220d, H220e, H220f, and H220g), and PMED Event (H220a) Files Document for Review
6/22/2022 GNRL3111.01: Preliminary Versions of the 2020 Prescribed Medicines (PMED) Event PUF Codebook and Delivery Document for Use in AHRQ and NCHS Review
6/22/2022 GNRL3112.01: Preliminary Version of the 2020 PMED Event PUF Data Set
6/23/2022 GNRL3110.05: Section 3 of FY2020 Non-MPC Event (H220b, H220c, h220h), MPC Event (H220d, H220e, H220f, and H220g), and PMED Event (H220a) Files Document for Review
6/24/2022 PCND0164.01: 2020 Priority Conditions Benchmarking Table
6/27/2022 GNRL4088.01: Delivery of the Single Round Data Exchange (SRD) for Panel 25 Round 5
6/27/2022 GNRL4089.01: Delivery of the Single Round Data Exchange (SRD) for Panel 24 Round 7
6/27/2022 GNRL4090.01: Delivery of the Single Round Data Exchange (SRD) for Panel 23 Round 9
6/27/2022 GNRL4091.01- GNRL4091.03: Delivery of the RU-Level End-Of-Round Files - P23R9/P24R7/P25R5
6/27/2022 GNRL4092.01- GNRL4092.03: Delivery of the Person-Level End-Of-Round Files - P23R9/P24R7/P25R5
6/28/2022 GNRL3110.07: Section 3 of FY2020 Non-MPC Event (H220b, H220c, h220h), MPC Event (H220d, H220e, H220f, and H220g), and PMED Event (H220a) Files Document for Review
6/28/2022 GNRL3111.02: Preliminary Versions of the 2020 Prescribed Medicines (PMED) Event PUF Codebook and Delivery Document for Use in AHRQ and NCHS Review – Updated
7/6/2022 GNRL3048.02: HC-211: 2019 Jobs Public Use File Delivery for Web Release – Updated
7/8/2022 GNRL3113.01: Delivery of the FY2020 Non-MPC Event (H220b, H220c, h220h) PUF HTML Files for Web Release
7/8/2022 GNRL3114.01: Delivery of the FY2020 MPC Event (H220d, H220e, H220f, and H220g) PUF HTML Files for Web Release
7/12/2022 GNRL3115.01: NCHS Checklist and Preliminary Version of the Delivery Document for the FY 2020 Consolidated Data PUF
7/12/2022 GNRL3117.01: NCHS Checklist and Preliminary Version of Delivery Document for the FY 2020 Person-Round-Plan (PRPL) PUF
7/12/2022 UEGN3625.01: The 2020/2019 QC Finding Tables of the PUF Event Expenditures
7/12/2022 UEGN3626.01: The Telehealth Visit Type Other Specify Text Strings Recoding for FY2021
7/14/2022 DOCM0700.05: Delivery of the 2021 MPC files for Sample selection - Wave 3
7/14/2022 DOCM0701.04: Delivery of the 2021 PC Sample file - Wave 3
7/14/2022 DOCM0702.04: Delivery of the 2021 Provider file for NPI coding - Wave 3
7/14/2022 GNRL3120.01: HC224: Preliminary Version of the 2020 Consolidated File
7/15/2022 CODE0949.01: Coding progress report for prescribed medicines
7/15/2022 EMPL2263.01: Analysis of the FY 2020 Hourly Wage Imputation Process
7/15/2022 GNRL3116.01: HC-220a: Delivery of the 2020 Prescribed Medicines (PMED) PUF and all Related Files for Web Release
7/15/2022 GNRL3118.01: Delivery of the FY2020 Non-MPC Event (H220b, H220c, h220h) PUF Document PDF Files for Web Release
7/15/2022 GNRL3119.01: Delivery of the FY2020 MPC Event (H220d, H220e, H220f, and H220g) PUF Document PDF Files for Web Release
7/20/2022 GNRL3120.02: HC224: Preliminary Version of the 2020 Consolidated File - Updated
7/20/2022 GNRL3121.01: FY 2020 Conditions PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ Review
7/20/2022 GNRL3122.01: HC222: Preliminary Version of the 2020 Conditions Data Set
7/20/2022 GNRL3123.01: Preliminary Version of the 2020 Appendix to the Event PUFs Delivery Document, and Codebooks for Review
7/20/2022 GNRL3124.01: HC220I: Preliminary Versions of the 2020 Appendix to the Event PUFs Data Sets
7/20/2022 GNRL3125.01: Preliminary Versions of the Codebook and Document for the FY 2020 Consolidated Data PUF for Use in AHRQ and NCHS Review
7/20/2022 GNRL3126.01: Preliminary Version of the 2020 Person-Round-Plan (PRPL) PUF Data Set
7/20/2022 GNRL3127.01: FY 2020 Person-Round-Plan PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ and NCHS Review
7/22/2022 CODE0949.02: Coding progress report for prescribed medicines
7/22/2022 UEGN3627.01: The FY2021 Initial Variable Construction Specifications
7/25/2022 EMPL2264.01: Panel 26 Round 1 Jobholder with 14 Retirement Jobs – Decision Required
7/26/2022 GNRL3121.02: Final Versions of the 2020 Conditions PUF Codebook and Delivery Document for AHRQ Review
7/26/2022 GNRL3123.02: Final Versions of the 2020 Appendix to the Event Files PUF Codebooks and Delivery Document for AHRQ Review
7/26/2022 GNRL3125.07: Final Versions of the Codebook and Delivery Document for the FY 2020 Consolidated Data PUF
7/26/2022 GNRL3127.02: FY 2020 Person-Round-Plan PUF Final Versions of Codebook and Delivery Document
7/26/2022 GNRL3127.06: FY 2020 Person-Round-Plan PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ and NCHS Review
7/26/2022 GNRL3127.09: FY 2020 Person-Round-Plan PUF Final Versions of Codebook and Delivery Document
7/26/2022 UEGN 2914.01: 2021 Specifications for Processing Flat-Fee Bundles
7/27/2022 GNRL4091.04: Delivery of the RU-Level End-Of-Round File - P26R3
7/27/2022 GNRL4092.04: Delivery of the Person-Level End-Of-Round File - P26R3
7/27/2022 WGTS1995.01: Derivation of the Annualized MEPS Families and Identification of the Responding MEPS Families for the Panel 23 Full-Year 2019
7/27/2022 WGTS2067.01: Create the P23 FY2020 Person-level SAQ Expenditure Weights
7/28/2022 GNRL4093.01: Delivery of the Single Round Data Exchange (SRD) for Panel 26 Round 3
7/28/2022 WGTS2013.01: Developing Sample Weights for the MEPS Veteran Self-Administered Questionnaire (VSAQ) Component for the Full-Year 2019 Consolidated (Expenditure) Public Use File
7/28/2022 WGTS2068.01: Create the P24 FY2020 Person-level SAQ Expenditure Weights
7/28/2022 WGTS2069.01: Create the P25 FY2020 Person-level SAQ Expenditure Weights
7/28/2022 WGTS2070.01: Create the P23P24P25 FY2020 Person-level SAQ Expenditure Weights
7/28/2022 WGTS2071.01: Creation of CPS Control Total Files Containing the Raking Dimensions for the Full-Year 2020 Self-Administered Questionnaire (SAQ) Expenditure Person Weight
7/28/2022 WGTS2072.01: Raking Panels 23, 24 and 25 (Panel 23/rounds 5-7, Panel 24/rounds 3-5 and Panel 25/rounds 1-3) Separately for the Individual Panel Full-Year 2020 Person-Level Weights Including the Poverty Status
7/28/2022 WGTS2074.01: Developing Sample Weights for the MEPS Diabetes Questionnaire Component (DCS) for the Panels 23, 24, and 25 Full-Year 2020 Expenditure File (PUF)
7/28/2022 WGTS2080.01: Delivery Files for the FY 2020 Individual Panel Expenditure Person-Level Weights, Panel 23, 24 and Panel 25
7/29/2022 CODE0949.03: Coding progress report for prescribed medicines
8/1/2022 FOOD0009.01: FY 2021 Food Security Basic Edit Specifications
8/1/2022 UEGN3628.01: The DN Text Strings Recoding for FY2021
8/3/2022 WGTS2047.01: New Weighting Memo #2047.01: Final: Estimating Standard Errors Using SUDAAN for the Panel 25, Round 1 PIT 2020 Person-Level Weights – Checking the Variance Strata and PSUs
8/4/2022 WGTS2050.01: Derivation of MEPS Panel 23 Full-Year 2020 Person Use Weights (Rounds 5-7)
8/4/2022 WGTS2055.01: New Weighting Memo #2055.01: MEPS Panels 23, 24, and 25 Full-Year 2020: Combine and Rake the P23, P24, and P25 Weights to Obtain the P23P24P25FY20 Person-Level USE Weights
8/4/2022 WGTS2065.01: New Weighting Memo #2065.01: Create the P23P24P25 Full-Year 2020 POV19 Raked Person Weight and Individual Panel Weights Delivery File
8/5/2022 CODE0949.04: Coding progress report for prescribed medicines
8/5/2022 CODE0949.05: Coding progress report for prescribed medicines
8/5/2022 CODE0949.06: Coding progress report for prescribed medicines
8/5/2022 DOCM0704.01: File of Provider Names for FY 2021
8/5/2022 GNRL3127.03: FY 2020 Person-Round-Plan PUF Final Versions of Codebook and Delivery Document – Updated
8/5/2022 UEGN2916.01: 2021 Proposal to Reset HC Reported Missing Copayment Amount for VA Covered Events
8/5/2022 WGTS2048.01: New Weighting Memo #2048.01: Panel 23 Full-Year 2020: Derivation of Eligibility and Response Indicators for the CPS-like Families
8/5/2022 WGTS2081.01: New Weighting Memo #2081.01: Food Security Weights for MEPS Panels 23, 24 and 25 Full-Year 2020
8/8/2022 CODE0950.01: MEPS Delivery of the ICD-10-CM/CCSR Crosswalk and COND Coding Uncodeable Text Strings for FY21
8/8/2022 COND1003.01: FY21 Basic Edit Specifications
8/9/2022 GNRL3131.01: NCHS Checklist and Preliminary Version of the 2020 Food Security File Delivery Document for Review
8/11/2022 FOOD0009.03: FY 2021 Food Security Basic Edit Specifications
8/12/2022 CODE0949.05: Coding progress report for prescribed medicines
8/12/2022 GNRL3128.01: HC-224: Full-Year 2020 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release
8/12/2022 GNRL3129.01: HC-220I: Delivery of the Final Appendix to the 2020 Event Files and all Related Files for Web Release
8/12/2022 GNRL3130.01: HC-222: Delivery of the Final 2020 Conditions File and All Related Files for Web Release
8/17/2022 ACCS0199.01 2021 ACCS Other Specify Text String Recoding
8/17/2022 GNRL3133.01: Preliminary Versions of 2020 Food Security File Codebook and Delivery Document
8/17/2022 GNRL3134.01: HC221: Preliminary Version of the 2020 Food Security Data Set
8/19/2022 COND1003.04: FY21 Basic Edit Specifications
8/19/2022 GNRL3132.01: HC-223: Delivery of the 2020 Person Round Plan (PRPL) PUF and Related Files for Web Release
8/19/2022 UEGN2917.01: 2021 Benchmark Tables Including MPC Estimates Obtained Using Machine Learning Models
8/19/2022 UEGN3629.01 - The Machine Learning Imputation Test Files
8/22/2022 PCND0163.02: 2021 PCND Constructed Variable Specifications
8/24/2022 GNRL4091.05 and GNRL4092.05: Delivery of End-Of-Round files (RU-Level and Person-Level) -P27R1
8/25/2022 GNRL3133.02: Final Versions of the 2020 Food Security File Codebook and Delivery Document
8/26/2022 CODE0949.07: Coding progress report for prescribed medicines
8/26/2022 GNRL3134.02: HC221: Final Version of the 2020 Food Security Data Set
8/26/2022 GNRL4096.01: Delivery of the Single Round Data Exchange (SRD) for Panel 27 Round 1
8/30/2022 DOCM0705.01: MEPS – 2021 Conditions Authority File After the 2021 HC Condition Coding
8/30/2022 UEGN3630.01: Specifications for the 2021 Pre-Imputation UEGN Files
9/1/2022 EMPL2265.01: 2021 Multi-Round Comment Review (MRCR) Performed by Employment Group
9/2/2022 CODE0949.08: Coding progress report for prescribed medicines
9/7/2022 DOCM1002.19: Group 1 of Patient Profiles
9/7/2022 WGTS2039.02: Developing Sample Weights for the MEPS Self-Administered Questionnaire (SAQ) for the Panels 23, 24, and 25 Full-Year 2020 Use File (PUF), and Creating the Full-Year 2020 Person Use SAQ Weights Delivery File
9/7/2022 WGTS2060.01: Creation of CPS Control Total Files Containing the Poverty Raking Dimensions for the Full-Year 2020 Reflecting 2019 Poverty Distribution
9/9/2022 GNRL3135.01: HC-221: Delivery of the 2020 Food Security PUF and Related Files for Web Release
9/13/2022 EMPL2266.01: FY2021 JOBS File Specifications for Approval
9/13/2022 WGTS2061.01: Derivation of MEPS Panel 24 Full-Year 2020 Special Person Weights (Rounds 3-5) to be used in Poverty Control Totals Computation
9/13/2022 WGTS2063.01: MEPS Panels 23 and 24 Full-Year 2020: Combine and Rake the P23 and P24 Weights to Obtain the P23P24FY20 Experimental Person-Level Weights to be used in Poverty Control Totals Computation
9/13/2022 UEGN3632.01: The 2021 Utilization Count Variables Construction Specification.
9/14/2022 EMPL2266.06: FY2021 JOBS File Specifications for Approval
9/14/2022 GNRL3122.02: HC222: Preliminary Version of the 2020 Conditions Data Set – Updated
9/14/2022 HINS1353, 1354, and 1355: Delivery of the FY21 EPCP Cross-tabs, with additional requested tables - panels 24, 25, and 26
9/14/2022 WGTS2038.02: Developing Panel 23 Self-Administered Questionnaire (SAQ) Use Weights for Full-Year 2020 (Rounds 5-7)
9/14/2022 WGTS2078.01: MEPS Panel 26 Round 1 – Computation of the 2020 NHIS weights that will serve as base weights for the Panel 26 Round 1 DU MEPS weights
9/14/2022 WGTS2054.01: Creating Factors to Adjust the 2020 Full-Year Consolidated PUF Person Weights Development to Better Reflect the Number of Persons who Died or Spent Part of the Year in a Nursing Home
9/15/2022 DOCM1002.21: Group 2 of Patient Profiles
9/15/2022 HINS1356.01: Delivery of the FY21 EPCP Cross-tabs, with additional requested tables - panel 23
9/15/2022 PCND0163.13: 2021 PCND Constructed Variable Specifications
9/20/2022 PRPL0177.01: Full-Year 2021 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing
9/21/2022 GNRL1902.03: FY 2016 Preliminary Conditions File and Codebook, NCHS Checklist, Delivery Document, and Recode Document - Revised
9/21/2022 GNRL1968.03: FY 2017 Preliminary Conditions File, Codebook, Recode Document, NCHS Checklist, and Delivery Document - Revised
9/22/2022 DOCM1002.23: Group 3 of Patient Profiles
9/23/2022 GNRL3130.02: HC-222: Delivery of the Final 2020 Conditions File and All Related Files for Web Release – Updated
9/27/2022 PRPL0177.05: Full-Year 2021 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing
9/27/2022 PRPL0177.13: Full-Year 2021 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing
9/29/2022 CODE0951.01: Delivery of the Coded FY2021 Industry and Occupation Files
9/29/2022 PRPL0177.15: Full-Year 2021 PRPL File Revisions to Coverage Record and HMO Variables, JOBS Linking, and Post-Linking Editing
9/30/2022 CODE0952.01: MEPS 2021 Delivery of PMED Final Reports for Uncodeable, Compounds, Foreign Meds, No-MDDB, Drug Groupings
9/30/2022 COND1004.01: 2021 Preliminary Conditions File Specifications
10/3/2022 DOCM0707.01: Delivery of 2021 Static Tables for SOP After the 2021 HC SOP Coding
10/3/2022 GNRL3109.01: FY2021 Person-Level Use PUF Variable List Changes for AHRQ Review
10/5/2022 CODE0952.07: MEPS 2021 Delivery of PMED Final Reports for Uncodeable, Compounds, Foreign Meds, No-MDDB, Drug Groupings
10/5/2022 INCO0760.01: Delivery of the 2020 NHIS Link File
10/6/2022 DOCM1002.25: Group 4 of Patient Profiles
10/11/2022 EMPL2267.01: FY2021 Panel 26 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
10/11/2022 EMPL2268.01: FY2021 Panel 26 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval
10/13/2022 DOCM1002.27: Group 5 of Patient Profiles
10/13/2022 EMPL2266.12: FY2021 JOBS File Specifications for Approval
10/14/2022 DOCM0708.01: Delivery of 2021 Static Tables for SRCS After the 2021 HC SRCS Coding
10/14/2022 EMPL2266.15: FY2021 JOBS File Specifications for Approval
10/14/2022 GNRL1939.04: HC-190: Delivery of the Final 2016 Conditions File and All Related Files for Web Release – Redelivery
10/14/2022 GNRL1996.03: HC-199: Delivery of the Final 2017 Conditions File and All Related Files for Web Release – Redelivery
10/14/2022 WGTS2018.01: Raking Panels 23 and 24 (Panel 23/rounds 3-5 and Panel 24/rounds 1-3) Separately for the Individual Panel Full-Year 2019 Person-Level Weights Including the Poverty Status
10/14/2022 WGTS2019.01: Delivery Files for the FY 2019 Individual Panel Expenditure Person-Level Weights, Panel 23 and Panel 24
10/14/2022 WGTS2046.01: Panel 24 Full-Year 2020: Derivation of Eligibility and Response Indicators for the CPS-like Families
10/17/2022 DOCM0706.01: Delivery of the 2021 MPC Pre-Matching Household Component Production File
10/19/2022 EMPL2267.02: FY2021 Panel 26 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
10/19/2022 HINS1361.01: Results of the QC Cross-Tabs for the HINS 2021/Gatekeeper FY variables
10/19/2022 WGTS2066.01: Panel 23, Panel 24, and Panel 25 Combined, Full-Year 2020: Raking Person Weights Including the Poverty Status to Obtain the Expenditure Person Weights
10/20/2022 HINS1361.04: Results of the QC Cross-Tabs for the HINS 2021/Gatekeeper FY variables
10/20/2022 WGTS5046.01: Delivery of the ADMN/DEMO Variables Used for Weights Development for FY21 (P23, P24, P25, and P26)
10/26/2022 CODE0953.01: Delivery of the 2021 PMED Authority File and Files for Matching Programs after PMED Coding
10/27/2022 COND1004.07: 2021 Preliminary Conditions File Specifications
10/28/2022 CODE0954.01: Delivery of 2021 Static Table for WHOBILL After the 2021 HC WHOBILL Coding
10/28/2022 EMPL2266.25: FY2021 JOBS File Specifications for Approval
10/31/2022 EMPL2269.01: FY2021 Panel 23 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
10/31/2022 EMPL2270.01: FY2021 Panel 23 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval
11/1/2022 EMPL2271.01: FY 2021 Wage Imputation Specification – Review and Approval Requested
11/1/2022 HINS1359.01 and HINS1360.01: FY21 Panel 23 rounds 7-9 and Panel 24 rounds 5-7 At Any Time/At Interview Date/At 12/31/21 variables and QC tabulations
11/1/2022 UEGN 2926.01: 2021 HC Edit Specs
11/3/2022 EMPL2269.02: FY2021 Panel 23 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
11/4/2022 HINS1357.01 and HINS1358.01: FY21 Panel 25 rounds 3-5 and Panel 26 rounds 1-3 At Any Time/At Interview Date/At 12/31/21 variables and QC tabulations
11/4/2022 WGTS5047.01: Delivery of the Preliminary Weight Flag for FY21
11/7/2022 EMPL2272.01: FY2021 Panel 25 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
11/7/2022 EMPL2273.01: FY2021 Panel 25 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval
11/10/2022 COND1004.10: 2021 Preliminary Conditions File Specifications
11/14/2022 DOCM0709.01: MEPS - Data Destruction - NHIS 2018 Sample Files
11/14/2022 WGTS2073.01: Updating Master Variance File Strata and PSUs for Panel 26, Round 1
11/14/2022 WGTS2079.01: Derivation of the 2020 Full-Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 23, Panel 24, and Panel 25 Combined
11/14/2022 WGTS2049.01: Panel 25 Full-Year 2020: Derivation of Eligibility and Response Indicators for the CPS-like Families
11/15/2022 WGTS2026.01: Derivation of the MEPS Panel 25 Full-Year 2020 Person Use Weights (Rounds 1-3)
11/16/2022 EMPL2274.01: FY2021 Panel 24 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval
11/16/2022 EMPL2275.01: FY2021 Panel 24 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval
11/21/2022 PRPL0178.01: FY21 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking
11/21/2022 UEGN3633.01: Deliver to AHRQ for approval specifications for the FY21 non-MPC (DN, OM, and HH) Expenditure Event files
11/21/2022 WGTS2084.01: MEPS: Establishing Variance Estimation Strata and PSUs for Panel 26, Round 1, Panel 25, Round 3, Panel 24, Round 5, and Panel 23, Round 7
11/21/2022 WGTS2087.01: Delivery File Providing a Linkage between the Person Records Sampled for MEPS Panel 25 and the Person Records in the 2019 NHIS Weights File
11/22/2022 FOOD0010.01: FY 2021 Food Security PUF Constructed Variables and Labels
11/22/2022 WGTS2053.01: Derivation of MEPS Panel 24 Full-Year 2020 Person Use Weights (Rounds 3-5)
11/30/2022 PRPL0178.08: FY21 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking
12/1/2022 DOCM0710.01: Delivery of Person-Level Base and Family Pseudo Weight for FY21
12/1/2022 WGTS5048.01: Delivery of Person-Level Base Weight, Individual Panel Base Weight, Family Membership Flag, and MSA variables for FY21 (P23, P24, P25, and P26)
12/6/2022 UEPD1227.02: 2021 (Panel 23 & 24 & 25 & 26) Household Prescribed Medicine and Associated Files - Set 1
12/7/2022 DEMO1020.01: Delivery of the Output Listings for Case Review of the MOPID and DAPID Variables' Construction for FY2021
12/7/2022 EMPL2276.01: Approval of Weighted NUMEMP Medians for Panel 23 Round 7-9, Panel 24 Round 5-7, Panel 25 Round 3-5, and Panel 26 Round 1-3 of FY 2021
12/9/2022 ADMN0928.01: FY21 Weighted Cross-tabs delivery of ADMN and DEMO variables
12/9/2022 DOCM0711.01: 2022 MPC sample file specs
12/9/2022 DOCM0712.01: 2022 PC sample file specs
12/9/2022 DOCM0713.01: 2022 provider file for NPI coding specs
12/9/2022 UEGN3634.01: Delivery of the FY21 Pre-Imputation files
12/12/2022 EMPL2277.01: FY 2021 Hourly Wage Imputation Output for Approval
12/12/2022 GNRL3136.01: Delivery of Data Reference Year PowerPoint Slide (2019 – 2022)
12/13/2022 HINS1363.01: Delivery of the HINS Ever Insured in FY 2021 variables LASTAGE and INSCV921 to be added to the internal "MEPS Master Files"
12/13/2022 WGTS2100.01: Panel 24 Full-Year 2021 Person Weight review output
12/14/2022 COND1005.01: AdHoc: Threshold Testing - Dataset H
12/14/2022 HINS1362.01: Results of the weighted QC Cross-Tabs for the HINS 2021 HMO/Gatekeeper FY variables
12/14/2022 UEGN 2927.01: 2021 Specification for Total Charge Imputation
12/14/2022 UEGN3635.01: Delivery of the 2020 Post-Imputation Files for the MEPS Master Files
12/14/2022 UEPD1227.03: Redelivery of 2021 Household Prescribed Medicine file due to the changes of ADMN/DEMO variable VADISABILITY
12/14/2022 WGTS2098.01: Panel 26 Full-Year 2021 Person Weight review output
12/15/2022 UEGN 2914.03: 2021 Specifications for Processing Flat-Fee Bundles
12/15/2022 UEGN2926.02: 2021 HC Edits Specs
12/16/2022 PRPL0178.16: FY21 PRPL Specifications Coverage Record and HMO Variables and Variable Editing: Post JOBS Linking
12/16/2022 UEGN 2953.01: 2021 Listing of Events with Questionable HC Reported Expenditures Found in the Pre-Editing QCs
12/19/2022 UEGN 2928.01: 2021 Specifications for Initializing MPSAMTs
12/19/2022 UEGN 2929.01: 2021 Specifications for MPC Rolling Event Edits
12/19/2022 UEPD1227.04: 2021 (Panel 23 & 24 & 25 & 26) PMED Supplemental File - Set 2: Person-Level File and Additional 3 Segment Variable Files
12/19/2022 WGTS2101.01: Panel 23 Full-Year 2021 Person Weight review output
12/20/2022 EMPL2278.01: Full-Year 2021 Wage Top Code Value for AHRQ Approval
12/20/2022 HINS1364.01: Delivery of the 2021 HINS Month-by-Month, Tricare plan, Private, Medicare, and Medicaid HMO/Gatekeeper, and PMEDIN/DENTIN Variables
12/20/2022 HINS1365.01: Delivery of the 2021 HINS Building Block Variables and COVERM Tables for Panel 23 Rounds 7 – 9, Panel 24 Rounds 5 – 7, Panel 25 Rounds 3 – 5, and Panel 26 Rounds 1 – 3
12/20/2022 HINS1366.01: Delivery of the FY 2021 HINS Medicare Part D supplemental variables
12/20/2022 UEGN2930.01: 2021 Specifications for SBD Disavowal Imputation
12/20/2022 UEGN 2931.01: 2021 Specifications for HHA Rolling Event Edits
12/20/2022 UEGN3637.01: Feedback on the RTI's FY2021 HHA Test Files
12/21/2022 EMPL2279.01: Delivery of the Full-Year 2021 Pre-Top-Coded Hourly Wage Variables and Person-Level, Uncondensed Industry and Occupation Codes
12/23/2022 COND1006.01: 2021 CLNK File Specifications
12/27/2022 EMPL2280.01: Full-Year 2021 JOBS File Establishment Size Top Code Value and Extent of JOBS Wage Top Coding for AHRQ Approval
12/27/2022 UEPD1227.05: 2021 (Panel 23 & 24 & 25 & 26) PMED Supplemental File - Set 3: Person/Round-Level Files
12/27/2022 UEGN3638.01: Deliver to AHRQ for approval specifications for the FY21 MPC (OB, OP, ER, and IP) Expenditure Event files
12/28/2022 EMPL2280.02: Full-Year 2021 JOBS File – Cases not flagged for top coding that may require edits
12/28/2022 UEGN2955.01: 2021 Listing of Events with Questionable HC Reported Expenditures Found in the HC Edits Output
12/29/2022 GNRL3136.09: Delivery of Data Reference Year PowerPoint Slide (2019 – 2022)
12/30/2022 UEGN3639.01: MEPS Design Change Memo for FY2022 – UEGN