Which of the following would be the best data source to identify opportunities to improve timeliness?

Before you decide which quality measures to report, it is helpful to know what kinds of data you will need to produce the scores for each measure. Sometimes the data you want to report already exist because someone else has collected them. In other cases, report sponsors have to collect the data themselves.

Depending on the measure, data can be collected from different sources, including medical records, patient surveys, and administrative databases used to pay bills or to manage care. Each of these sources may have other primary purposes, so there are advantages and challenges when they are used for the purposes of quality measurement and reporting.

Administrative Data

In the course of providing and paying for care, organizations generate administrative data on the characteristics of the population they serve, as well as on their use of services and the charges for those services, often at the level of individual users. These data are gathered from claims, encounter, enrollment, and provider systems. Common data elements include the type of service, the number of units (e.g., days of service), diagnosis and procedure codes for clinical services, the location of service, and the amounts billed and reimbursed.
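To make these data elements concrete, here is a rough sketch (not drawn from the article) of how a single claims or encounter record might be modeled; all field names and values are hypothetical.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ClaimRecord:
    """Hypothetical administrative claim/encounter record (illustrative only)."""
    member_id: str          # enrollee identifier from the enrollment system
    provider_id: str        # rendering provider from the provider system
    service_type: str       # e.g., "inpatient stay", "office visit"
    units: int              # number of units, e.g., days of service
    diagnosis_codes: list   # diagnosis codes for the clinical service
    procedure_codes: list   # procedure codes for the clinical service
    place_of_service: str   # location of service
    service_date: date
    amount_billed: float
    amount_reimbursed: float

# One illustrative record, as a payer's claims system might capture it.
claim = ClaimRecord("M0001", "P1234", "inpatient stay", 3,
                    ["E11.9"], ["99223"], "hospital",
                    date(2010, 1, 15), 5400.00, 3850.00)
print(claim.service_type, claim.units, claim.amount_reimbursed)
```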

Advantages of Administrative Data

Challenges of Administrative Data

Patient Medical Records

A medical record is documentation of a patient's medical history and care. The advent of electronic medical records has increased the accessibility of patients’ files. Wider use of electronic medical record systems is expected to improve the ease and cost of using this information for quality measurement and reporting.

Advantages of Medical Records

Challenges of Medical Records

Patient Surveys

Survey instruments capture self-reported information from patients about their health care experiences. Aspects covered include reports on the care, service, or treatment received and perceptions of the outcomes of care. Surveys are typically administered to a sample of patients by mail, by telephone, or via the Internet.

Advantages of Patient Surveys

Challenges of Patient Surveys

Comments from Individual Patients

Comments from individual patients, often referred to as anecdotal information, include any type of information on health care quality that is gathered informally rather than through carefully designed research efforts. Anecdotal information is becoming increasingly common as private Web sites make it possible for health care consumers to share their personal experiences with health plans, hospitals, and, most prominently, physicians.

Advantages of Patient Comments

Challenges of Patient Comments


Standardized Clinical Data

Certain kinds of facilities, such as nursing homes and home health agencies, are required to report detailed information about the status of each patient at set time intervals. The Minimum Data Set (MDS), the required information for nursing homes, and the Outcome and Assessment Information Set (OASIS), the data required by Medicare for certified home health agencies, store the data used in quality measures for these provider types.

Advantages of Standardized Clinical Data

Challenges of Standardized Clinical Data

The Need for Standardization

The use of quality measures to support consumer choice requires a high degree of data validity and reliability. To make sure that comparisons among providers and health plans are fair and that the results represent actual performance, it is critical to collect data in a careful, consistent way using standardized definitions and procedures.



Data Quality and MDM

David Loshin, in Master Data Management, 2009

5.3.5 Timeliness

Timeliness refers to the time expectation for the accessibility and availability of information. It can be measured as the time between when information is expected and when it is readily available for use. In the MDM environment, this concept is of particular interest, because synchronizing application data updates with the centralized resource is what supports the concept of a common, shared, unique representation. The success of business applications relying on master data depends on consistent and timely information. Therefore, service levels specifying how quickly the data must be propagated through the centralized repository should be defined so that compliance with those timeliness constraints can be measured.
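As a minimal sketch of the measurement described here (the timestamps, record layout, and 15-minute service level below are assumptions, not taken from the book), compliance with a propagation service level might be computed like this:

```python
from datetime import datetime, timedelta

# Assumed service level: master-data updates must reach the central repository within 15 minutes.
SLA = timedelta(minutes=15)

# (update id, written in the source application, available in the central repository)
updates = [
    ("u1", datetime(2009, 3, 2, 9, 0),  datetime(2009, 3, 2, 9, 8)),
    ("u2", datetime(2009, 3, 2, 9, 5),  datetime(2009, 3, 2, 9, 40)),
    ("u3", datetime(2009, 3, 2, 9, 10), datetime(2009, 3, 2, 9, 20)),
]

# Timeliness: time between when the information is expected and when it is available for use.
late = [(uid, (available - written) - SLA)
        for uid, written, available in updates
        if available - written > SLA]

print(f"SLA compliance: {1 - len(late) / len(updates):.0%}")
for uid, overage in late:
    print(f"update {uid} exceeded the service level by {overage}")
```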

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123742254000059

GIS Applications for Socio-Economics and Humanity

Peiyao Zhang, ... Kai Cao, in Comprehensive Geographic Information Systems, 2018

3.10.2.2 The Features of Historical Data

As the foundation of HGIS, historical data have the following features:

Timeliness. The time attribute is the core attribute of historical data, and it is also what most distinguishes historical data from modern data. Historical data should, in the first instance, carry clear chronological information. In a cross-sectional study at a certain point in time, attention should be paid to identifying and unifying the time standard of each database. In the study of transitional changes, the focus should be on selecting and integrating data so as to form an effective time series. In addition to transitional changes, Man (2002) has also introduced the concept of the growing period of geographic features: by extracting the beginning and end of a certain state of a research object, the changes of that object can be described in the two dimensions of time and space. This method has since been applied to the construction of CHGIS and other related research.

Spatiality. The occurrence of historical events and phenomena can be associated with a specific location. Spatial attributes are not directly linked with historical data, but they are the necessary information base for HGIS research and application. The spatial information of historical data is, on the one hand, derived from the original record of a document or excavation site; on the other hand, it may also be derived as identifications made during the study. Man introduced the concept of carrier data, pointing out that “carrier data can be defined as the data which can carry the elements of other data and the implementation of the relevant spatial location of the data.” He also recommends selecting appropriate carrier data to build up the spatial framework for historical data. The selection of carrier data usually requires consideration of spatial resolution, the stability and possibility of the data in the time series, and other related characteristics (Man, 2008).

Multisource. In addition to timeliness and spatiality, the thematic nature of historical data is another important attribute, representing the main object and content of the data description. At the same time, the sources of historical data on a given topic are diverse. For example, records of the number of schools in Peking City during the Republic of China period may come from official historical records and investigations of the schools, official materials and archives kept on campus, travel guides for Peking City, celebrity memoirs, and other sources. Among them, official investigations can be segmented into national, regional, and district levels. The diversity of data sources may, on the one hand, offer good support for the comprehensiveness and continuity of the sample data; on the other hand, it increases the difficulty of ensuring the accuracy and standardization of the data in a study.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124095489096597

Stream processing in IoT: foundations, state-of-the-art, and future directions

X. Liu, ... R. Buyya, in Internet of Things, 2016

8.2.3.1 Timeliness and Instantaneity

Ensuring the timeliness of processing requires the ability to collect, transfer, process, and present the stream data in real time. As the value of data may vanish rather rapidly over time, the streaming architecture needs to perform all calculation and communication on the fly, on data as it arrives.

On the other hand, data generation in IoT environments depends largely on the status of the data sources. The amount of data generated during low-activity periods can be dramatically less than the amount observed at peak times. Usually the stream-processing platform has no control over the volume and complexity of the incoming data stream. Therefore, it is necessary to build an adaptive platform that can elastically scale with respect to fluctuating processing demands, and still remain portable and configurable in order to stay agile in response to continuously shifting processing needs.
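The following is a minimal sketch of the elastic behavior described above, not code from the chapter: it watches the arrival volume over a sliding window and suggests how many workers are needed. The window size and per-worker capacity are assumed values.

```python
from collections import deque
import time

class AdaptiveScaler:
    """Toy elasticity policy: suggest a worker count from recent arrival volume."""

    def __init__(self, window_seconds=60, events_per_worker=1000,
                 min_workers=1, max_workers=32):
        self.window_seconds = window_seconds
        self.events_per_worker = events_per_worker   # assumed per-worker capacity per window
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.arrivals = deque()                      # timestamps of recently seen events

    def record_event(self, now=None):
        now = time.time() if now is None else now
        self.arrivals.append(now)
        # Discard events that have slid out of the observation window.
        while self.arrivals and now - self.arrivals[0] > self.window_seconds:
            self.arrivals.popleft()

    def desired_workers(self):
        # One worker per `events_per_worker` events in the window, rounded up and clamped.
        needed = (len(self.arrivals) + self.events_per_worker - 1) // self.events_per_worker
        return max(self.min_workers, min(self.max_workers, needed))

scaler = AdaptiveScaler()
for _ in range(2500):            # simulate a burst of incoming events
    scaler.record_event()
print(scaler.desired_workers())  # 3 workers for ~2,500 events in the window
```

A real platform would also weigh processing latency and queue depth, but the windowing and clamping shown here capture the basic scale-up and scale-down decision.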

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128053959000083

Master Data Synchronization

David Loshin, in Master Data Management, 2009

11.2 Aspects of Availability and Their Implications

What do timeliness, currency, latency, consistency, coherence, and determinism mean in the context of synchronized master data? In Chapter 5 we explored the dimensions of data quality, yet the rules and thresholds for those dimensions are defined from the point of view of each of the business applications using the data sets. But as characteristics of a synchronized master view, these terms take on more operational meanings. Within each application, accesses to each underlying managed data set are coordinated through the internal system services that appropriately order and serialize data reads and writes. In this manner, consistency is maintained algorithmically, and the end users are shielded from having to be aware of the mechanics employed.

However, the nature of the different ways that master data are managed raises the question of consistency above the system level into the application service layer. This must be evaluated in the context of the degree of synchronization needed by the set of client applications using the master data asset. Therefore, the decisions regarding the selection of an underlying MDM architecture, the types of services provided, the application migration strategy, and the tools needed for integration and consolidation all depend on assessing the business process requirements (independently of the way the technology has been deployed!) for data consistency.

Each of the key characteristics impacts the maintenance of a consistent view of master data across the client application infrastructure, and each of the variant MDM architecture styles will satisfy demands across these different dimensions in different ways. Our goal is to first understand the measures of the dimensions of synchronization and how they address the consistency of master data. Given that understanding, the MDM architects would assess the applications that rely on the master data asset as well as the future requirements for applications that are planned to be developed and determine what their requirements are for master data synchronization. At this point, each of the architectural styles can be evaluated to determine how well each scores with respect to these dimensions. The result of these assessments provides input to inform the architect as to the best architecture to meet the organizational needs.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123742254000114

DQAF Concepts

Laura Sebastian-Coleman, in Measuring Data Quality for Ongoing Improvement, 2013

Timeliness

In its most general definition, timeliness refers to the appropriateness of when an event happens. In relation to data quality content, timeliness has been defined as the degree to which data represent reality from the required point in time (English, 1999). With regard to processing, timeliness (also referred to as latency) is associated with data availability, the degree to which customers have the data they need at the right time. Measures of timeliness can be made in relation to a set schedule or to the occurrence of an event. For example, timeliness is the degree to which data delivery from a source system conforms to a schedule for delivery. In large data assets, data is made available once processing is complete. Unsuccessful processing can result in delays in data availability. The timeliness of events within data processing itself can be a measure of the health of the data delivery mechanisms.
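As a small sketch of timeliness measured as conformance to a delivery schedule (the 6:00 a.m. deadline and the arrival times below are assumptions for illustration):

```python
from datetime import datetime, time

DEADLINE = time(6, 0)   # assumed schedule: the source file is due by 06:00 each day

arrivals = [            # actual file-arrival timestamps (illustrative)
    datetime(2013, 1, 7, 5, 42),
    datetime(2013, 1, 8, 6, 15),   # late delivery
    datetime(2013, 1, 9, 5, 55),
]

on_time = sum(1 for a in arrivals if a.time() <= DEADLINE)
print(f"{on_time} of {len(arrivals)} deliveries conformed to the schedule "
      f"({on_time / len(arrivals):.0%})")
```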

Data lag (the time between when data is updated in its source and when it is made available to data consumers), information float (the time between when a fact becomes known and when it is available for use) and volatility (the degree to which data is likely to change over time) all influence the perception of data currency and timeliness. Given these factors, for most people it’s clear that data can be “timely” according to a schedule and still not meet the needs of data consumers. In that case, the thing that needs to be changed is the schedule itself (for example, to reduce data lag), not how we objectively measure timeliness. As with other aspects of data, metadata about data timeliness—schedules and update processes—needs to be shared with data consumers so that they can make better decisions about whether and how to use data.
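Data lag and information float can both be computed directly from timestamps. A sketch, with assumed timestamps for when a fact became known, when the source was updated, and when the data became available to consumers:

```python
from datetime import datetime

records = [
    # (fact became known, updated in the source, available to data consumers)
    (datetime(2013, 2, 1, 8, 0),  datetime(2013, 2, 1, 9, 30), datetime(2013, 2, 2, 6, 0)),
    (datetime(2013, 2, 1, 12, 0), datetime(2013, 2, 1, 12, 5), datetime(2013, 2, 2, 6, 0)),
]

for known, updated, available in records:
    data_lag = available - updated            # source update -> availability to consumers
    information_float = available - known     # fact known -> available for use
    print(f"data lag: {data_lag}, information float: {information_float}")
```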

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123970336000055

Generic versus discipline-specific skills

David E Woolwine, in Practising Information Literacy, 2010

Timeliness

Almost all respondents noted that timeliness depended upon disciplinary context, especially upon the research question at hand. An older work, even in science, may still be timely if it is a work that has established a field or subfield. The senior scientist also noted that if a set of research questions had fallen into neglect but was worth reviving (something only an expert in the field might know, based on knowledge of the content), the most timely source may be quite old. The senior social scientist was willing to look at some materials that were decades old. Both philosophers felt that one could easily reference material thirty to fifty years old.

What is ‘timeliness’, therefore, in practice? ‘Timely’ is what is taken to be relevant—it is not primarily defined by date of publication. It can only be determined by an understanding of the research questions existing in the discipline itself, or even in a subfield of a discipline. More often than not, a determination of what is ‘timely’ depends upon the specific research topic and can only be fully grasped within a disciplinary framework. This indicates that both the library faculty and students will be able to identify a timely work only if they know some of the content of a discipline.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978187693879650008X

Configuring the Physical Architecture

Richard F. Schmidt, in Software Engineering, 2013

12.2.5.1 Design responsiveness

Software design responsiveness involves the timeliness of the software product’s response to user inputs, external interface stimuli, or interactions with elements of the computing environment. The software structural design must be evaluated to determine whether it can be enhanced to improve the software product’s responsiveness to requested actions. The following guidelines address enhancing software responsiveness to user-based requests (a small illustrative sketch follows the list):

1. Provide timely feedback concerning the requested action:
   - Promptly acknowledge a user input.
   - Provide data-processing progress indicators for actions that take a significant amount of time.
   - Respond initially with the most important information, then disclose additional information as it becomes available.
   - Alert the user to the anticipated delay when responding to complicated requests.

2. Prioritize data processing actions:
   - Postpone low-priority data processing actions until computing resources are available.
   - Anticipate data processing needs and perform actions in advance, when possible.

3. Optimize the task queue backlog:
   - Reorder the task queue based on priority.
   - Flush tasks that are overtaken by events or may no longer be needed.

4. Supervise multitasking performance:
   - Monitor multitasking progress and adjust resource allocations to optimize task execution and termination.
   - Balance task durations and resource commitments.
   - Predict task durations and determine task discreteness, concurrency, and synchronization tactics.
   - Establish resource monitoring and intercession supervision procedures by anticipating resource conflicts and deadlock situations.
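The sketch below illustrates guidelines 2 and 3 only (prioritizing actions and pruning the backlog); it is a generic example, not code from the book, and the task names are made up.

```python
import heapq
import itertools

class TaskQueue:
    """Priority-ordered backlog; lower numbers run first, ties served in FIFO order."""

    def __init__(self):
        self._heap = []
        self._sequence = itertools.count()   # tie-breaker so entries never compare callables

    def submit(self, priority, name, still_needed=lambda: True):
        heapq.heappush(self._heap, (priority, next(self._sequence), name, still_needed))

    def next_task(self):
        # Flush tasks that were overtaken by events or are no longer needed.
        while self._heap:
            _, _, name, still_needed = heapq.heappop(self._heap)
            if still_needed():
                return name
        return None

queue = TaskQueue()
queue.submit(2, "refresh thumbnails")
queue.submit(0, "respond to user click")                   # highest priority
queue.submit(1, "autosave", still_needed=lambda: False)    # overtaken by events; flushed
print(queue.next_task())   # -> respond to user click
print(queue.next_task())   # -> refresh thumbnails
```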

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780124077683000124

Data Quality

Jan L. Harrington, in Relational Database Design (Third Edition), 2009

Publisher Summary

Data quality ensures the accuracy and timeliness of data, and it is much easier to ensure data quality before data get into a database than after they are stored. To be useful, the data in a database must be accurate, timely, and available when needed. Data quality problems arise from a wide range of sources and have many remedies. One source of data quality problems is missing data; there are two general causes: data that are never entered into the database and data that are entered but then deleted when they shouldn't be. Another quality problem is incorrect data, which is probably the worst type of problem to detect and prevent. Often incorrect data are not detected until someone external to the organization makes a complaint, and determining how the error occurred is equally difficult because the problems are sometimes one of a kind. Data can also be incomprehensible; unlike incorrect data, incomprehensible data are relatively easy to spot, although finding the source of the problem may be as difficult as it is with incorrect data. The last problem is inconsistent data: if data are to be consistent, then, for example, the name and address of a customer must be stored in exactly the same way in every database in which they appear.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B978012374730300019X

DQAF Measurement Types

Laura Sebastian-Coleman, in Measuring Data Quality for Ongoing Improvement, 2013

Assessing the Validity of Data Content

While many measures of completeness and timeliness are interwoven with data processing, measures of validity are based on comparisons to a standard or rule that defines the domain of valid values. Most fields for which validity can be measured are populated with codes or other shorthand signifiers of meaning. Codes may be defined in a reference table or as part of a range of values or based on an algorithm. Conceptually, a validity measure is just a test of membership in a domain. From a technical point of view, ways of measuring validity depend on how the data that defines the domain is stored. Validity checks can be understood in terms of the way their domains are defined.

Basic validity check—Comparison between incoming values and valid values as defined in a code table or listing.

Basic range of values check—Comparison of incoming data values to values defined within a stated range (with potential for a dynamic range) including a date range.

Validity check based on a checksum or other algorithm—For example, a check to test the validity of Social Security numbers, National Provider Identifiers, or other numbers generated through an algorithm.
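As a sketch of how these three kinds of single-column checks might look in code (the gender code set, the date range, and the use of a generic Luhn-style check digit are all assumptions for illustration, not the framework's definitions):

```python
from datetime import date

# 1. Basic validity check: membership in a code table or listing (assumed code set).
GENDER_CODES = {"M", "F", "U"}
def valid_code(value, domain=GENDER_CODES):
    return value in domain

# 2. Basic range-of-values check, including a date range (bounds assumed).
def valid_date(value, start=date(2000, 1, 1), end=date(2013, 12, 31)):
    return start <= value <= end

# 3. Algorithm-based check: a generic Luhn check digit, shown only as an example
#    of testing numbers generated through an algorithm.
def luhn_valid(number: str) -> bool:
    checksum = 0
    for i, ch in enumerate(reversed(number)):
        digit = int(ch)
        if i % 2 == 1:           # double every second digit from the right
            digit *= 2
            if digit > 9:
                digit -= 9
        checksum += digit
    return checksum % 10 == 0

print(valid_code("F"), valid_date(date(2012, 6, 1)), luhn_valid("79927398713"))
```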

The question most people want to answer in relation to validity is: How much of this data is invalid? In some cases, domains contain a small set of values, and validity measures at the level of the individual value are comprehensible. For example, in most systems, the gender code consists of representations of male, female, and unknown. While the concept of gender in our culture is changing (so that some systems will have values to signify transgender or changes in gender), the set of code values used to represent gender is still relatively small. On the other hand, codes representing medical procedures (CPT codes, HCPCS codes, ICD procedure codes) number in the tens of thousands. To make measurement of validity comprehensible (especially for high-cardinality code sets), it is helpful to have both the detail of the individual codes and a measure of the overall level of valid and invalid codes.

When we first defined the DQAF, we included pairs of validity measures: a detail measure at the level of the value and a summary accounting for the roll-up of counts and percentages of valid and invalid codes. However, it quickly became clear that the validity summary measure looked pretty much the same (and thus could be represented the same way) regardless of the basis of the detailed validity measure. So, we created one type to describe all validity roll-ups.

Roll-up counts and percentage of valid/invalid from a detailed validity measurement.

Validity checks against code tables or ranges of values focus on one field at a time. More complex forms of validity involve the relationship between fields or across tables. These can be more challenging to measure because they often require documentation of complex rules in a form that can be used to make the comparisons.

Validity of values in related columns on the same table; for example, ZIP Codes to State Abbreviation Codes.

Validity of values based on a business rule; for example, Death Date should be populated only when Patient Status equals “deceased”.

Validity of values across tables; for example, ZIP Code to State Code on two different tables.

As with the single-column measures of validity, each of these validity measurement types should include a roll-up of valid-to-invalid results (see Figure 6.2). In many cases, because of differences in the timing of data delivery and the nature of the data content, cross-table relationships cannot be measured effectively through in-line measures. These are better thought of as measures of integrity that can be taken as part of periodic measurement of data.


Figure 6.2. Basic Validity
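A rule-based check such as the Death Date example above might be sketched as follows; the column names and records are hypothetical.

```python
records = [
    {"patient_status": "deceased", "death_date": "2012-11-03"},
    {"patient_status": "active",   "death_date": "2012-05-17"},   # violates the rule
    {"patient_status": "active",   "death_date": None},
]

# Business rule: Death Date should be populated only when Patient Status equals "deceased".
violations = [r for r in records
              if r["death_date"] is not None and r["patient_status"] != "deceased"]

print(f"{len(violations)} of {len(records)} records violate the rule "
      f"({len(violations) / len(records):.0%} invalid)")
```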

The process to confirm data validity is similar regardless of how the domain of valid values is defined (set of valid values, range of values, or rule). First, the data that will be validated must be identified and its domain defined. Next, record counts for the distinct value set must be collected from the core data. Then the distinct values can be compared to the domain and validity indicators can be assigned. The results of the comparisons constitute the detailed measurements. For reporting purposes, these results can be rolled up to an overall percentage of valid and invalid values.
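That process might look like the following in outline; the code table and column values are assumed for illustration.

```python
from collections import Counter

VALID_DOMAIN = {"A", "B", "C"}                              # assumed code table
column_values = ["A", "B", "B", "X", "C", "A", "Y", "B"]    # assumed core data

# Detailed measurement: record counts for each distinct value, with a validity indicator.
detail = [(value, count, value in VALID_DOMAIN)
          for value, count in Counter(column_values).items()]

# Roll-up: overall counts and percentages of valid and invalid values for reporting.
valid_count = sum(count for _, count, is_valid in detail if is_valid)
total = len(column_values)

print(detail)
print(f"valid: {valid_count}/{total} ({valid_count / total:.0%}), "
      f"invalid: {total - valid_count}/{total} ({(total - valid_count) / total:.0%})")
```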

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780123970336000067

Scalable Data Warehouse Architecture

Daniel Linstedt, Michael Olschimke, in Building a Scalable Data Warehouse with Data Vault 2.0, 2016

2.1.3 Analytical Complexity

Due to the availability of large volumes of data with high velocity and variety, businesses demand different and more complex analytical tasks to produce the insight required to solve their business problems. Some of these analyses require that the data be prepared in a fashion not foreseen by the original data warehouse developers. For example, the data to be fed into a data mining algorithm may need to have different characteristics with regard to variety, volume, and velocity.

Consider the example of retail marketing: the campaign accuracy and timeliness need to be improved when moving from retail stores to online channels where more detailed customer insights are required [11]:

In order to determine customer segmentation and purchase behavior, the business might need historical analysis and reporting of customer demographics and purchase transactions.

Cross-sell opportunities can be identified by analyzing market baskets that show products that can be sold together.

To understand the online behavior of their customers, click-stream analysis is required. This can help to present up-sell offers to the visitors of a Web site.

Given the high amount of social network data and user-generated content, businesses tap into the data by analyzing product reviews, ratings, likes and dislikes, comments, customer service interactions, and so on.

These examples should make it clear that, in order to solve such new and complex analytical tasks, data sources of varying complexity are required. Also, mixing structured and unstructured data becomes more and more common [11].

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128025109000027
