
Postclosing Integration: Mergers, Acquisitions, and Business Alliances

Donald M. DePamphilis, in Mergers, Acquisitions, and Other Restructuring Activities (Tenth Edition), 2019

Employees: Addressing the “Me” Issues Immediately

Employees need to understand early on what is expected of them and why. Simply telling employees what to expect following a takeover is not enough; they need to know why. Any narrative the acquiring firm provides to justify the takeover and explain its implications for employees of both the acquirer and the target can be disruptive, in that it can create angst by challenging current practices and core beliefs.

How employees feel about a takeover (i.e., their emotional mindset) can determine its ultimate success.19 Methodologies such as the critical incident technique exist to determine how employees feel about a takeover.20 Without employee acceptance, employees can resist integration efforts either openly (actively) or passively. The latter is perhaps the most insidious, as employees can appear to be cooperating but in fact are not: they are slow to respond to requests for information or behavioral changes, or do not respond at all. In the absence of overt resistance (openly refusing to do something), passive resistance (sometimes called passive-aggressive behavior) undermines the effective integration of the acquirer and target firms.

Acquirer and target firm employees are interested in any information pertaining to the merger and how it will affect them, often in terms of job security, working conditions, and total compensation. For example, if the acquirer expects to improve worker productivity or reduce the cost of benefits, it is critical to explain that such actions are required for the long-term viability of the business because the markets in which the firms compete have become increasingly competitive.

Target firm employees often represent a substantial portion of the acquired company’s value. The CEO should lead the effort to communicate to employees at all levels through on-site meetings or via teleconferencing. Communication to employees should be as frequent as possible; it is better to report that there is no change than to remain silent. Direct communication to all employees at both firms is critical. Deteriorating job performance and absence from work are clear signs of workforce anxiety. Many companies find it useful to create a single information source accessible to all employees, be it an individual whose job is to answer questions or a menu-driven automated phone system programmed to respond to commonly asked questions. The best way to communicate in a crisis, however, is through regularly scheduled employee meetings.

All external communication in the form of press releases should be coordinated with the PR department to ensure that the same information is released concurrently to all employees. Internal e-mail systems, voice mail, or intranets may be used to facilitate employee communications. In addition, personal letters, question-and-answer sessions, newsletters, and videotapes are highly effective ways to deliver messages.


URL: https://www.sciencedirect.com/science/article/pii/B9780128150757000061

Teamwork and Team Performance Measurement

Christopher W. Wiese, ... Eduardo Salas, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Rating Scales

One of the most common forms of assessment of team constructs is the rating scale. Rating scales ask people to indicate their attitudes, opinions, beliefs, or feelings on a large number of items. Thus, rating scales can be used to assess reflections on team behaviors, opinions concerning how team members feel, or even beliefs concerning team members' cognitions. There are a number of ways to develop rating scales. If one desires to gauge opinions concerning team members' behavior, the first step would be to compile a description of the behaviors of interest. One approach to identifying and accumulating these behaviors would be to use a critical incident technique, which gathers information from subject-matter experts (SMEs) regarding memorable events, both positive and negative, that occurred in a specific situation (Chell, 1998). These behaviors would then serve as the content for the questions asked on the measure.

Once the content has been identified, there are several options for how to present the information. A few possibilities include: (1) scale points can simply be represented by numbers, (2) descriptive words can be used at the end scale points, or (3) verbal descriptions can be used at each scale point (Meister, 1985). Another choice is how many scale points to use. First, there must be enough scale points to fully represent the question being asked, as a dichotomous scale might not capture the different degrees of the attribute being assessed. Another consideration is whether to use an odd number of scale points, which provides the rater with the option of choosing a neutral response. These options have their respective pros and cons, and it is up to the measure designer which to prefer. An example of a rating scale is provided in Figure 2.
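The presentation options above can be sketched in code. The following is a minimal illustration, not from the chapter: a small scale object that can render numeric points with optional verbal anchors (covering options 1 and 2) and report whether an odd number of points leaves the rater a neutral midpoint. The class and labels are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class RatingScale:
    points: int                                  # number of scale points
    labels: dict = field(default_factory=dict)   # point -> verbal anchor

    def has_neutral_midpoint(self) -> bool:
        # An odd number of points gives the rater a neutral middle option.
        return self.points % 2 == 1

    def render(self) -> str:
        # Show each point, with its verbal anchor where one is defined.
        return "  ".join(
            f"{p}({self.labels[p]})" if p in self.labels else str(p)
            for p in range(1, self.points + 1)
        )

# Option 2: descriptive words at the end scale points only.
scale = RatingScale(points=5, labels={1: "Never", 5: "Always"})
print(scale.render())                # 1(Never)  2  3  4  5(Always)
print(scale.has_neutral_midpoint())  # True
```

A fully anchored scale (option 3) would simply supply a label for every point.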


Figure 2. Example of a rating scale measuring feedback.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868220175

Motivational Psychology of Human Development

Lutz von Rosenstiel, ... Günter W. Maier, in Advances in Psychology, 2000

“The Motivation to Work”—Herzberg's Controversial Concept

The study by Herzberg and his colleagues (Herzberg et al., 1959), which was to become one of the most cited studies in I/O psychology, started with rather vague theoretical considerations that led to unexpected empirical findings. Using Flanagan's (1954) critical incident technique, Herzberg et al. asked employees in which situations they were particularly satisfied and how this influenced their achievement orientation. They then reversed the question and asked in which specific situations the employees were particularly dissatisfied and how this, in turn, influenced their achievement orientation.

To summarize the results, Herzberg et al. (1959) found that so-called “motivators” such as previous achievements, assigned responsibility, and the opportunity for mental growth are most basic for experiencing satisfaction. It is a core feature of these motivators that they are directly linked to the content of the work process (“content variable”) and thus relate to intrinsic motivation and the willingness to achieve alike. In contrast, so-called “hygiene factors” such as interpersonal relations, company policies, working conditions, and payment, are predominant causes for dissatisfaction at work. Hygiene factors relate to the framework conditions of work (“context variable”), which at best may contribute to the removal of the existing dissatisfaction but directly cause neither satisfaction nor the willingness to achieve.

The findings reported by Herzberg et al. (1959) were replicated whenever the same method was used. However, if different methods were used, the expected results could almost never be obtained. Equally problematic seems to be the theoretical argument that the participants’ responses may be biased according to a tendency to maintain self-esteem: The causes for satisfaction are attributed to oneself, for example, one’s own achievements, whereas the causes for dissatisfaction are attributed to external causes, for example, the supervisor. But neither these nor further critical arguments regarding Herzberg’s method, categorization system, and theory (cf. Gebert & von Rosenstiel, 1996; Locke & Henne, 1986; Neuberger, 1974; Vroom, 1964) hampered the widespread use of Herzberg’s theory in applied settings. This may be due to the specific recommendations derived from the theory on how to shape work content and work environment to stimulate work motivation. Other models, such as the job characteristics theory by Hackman and Oldham (1974), while being more carefully defined, have not received the same attention as Herzberg’s approach.


URL: https://www.sciencedirect.com/science/article/pii/S0166411500800175

Heuristics

Michael D. Mumford, Lyle E. Leritz, in Encyclopedia of Social Measurement, 2005

Psychometric Methods

In the psychometric approach, heuristics are identified and assessed with respect to people, and the performance differences observed among people, rather than with respect to the particular actions observed for a given set of tasks. This point is of some importance because the psychometric approach tends to emphasize heuristics associated with performance differences, discounting heuristics that are applied in a similar fashion by all individuals. To elicit these individual differences, psychometric studies rely on one of two basic techniques: self-report and tests.

Self-Report

One version of the self-report approach assumes that people are aware of, and understand the implications of, the strategies they apply as they execute tasks lying in different domains. Of course, given this assumption, it is possible to identify heuristics simply by asking people about the heuristics they apply in different endeavors. In accordance with this proposition, in one study, undergraduates were asked to indicate whether they applied heuristics such as visualization, step-by-step analysis, and analogies when working on interpersonal, academic, or daily life tasks. It was found that people could describe where, and how frequently, they applied these general heuristics. This direct reporting approach, however, has proved less effective when specific heuristics, particularly heuristics applied nearly automatically, are under consideration.

When it is open to question whether people can say when, and how, they apply heuristics, an alternative strategy is often used. In an approach referred to as the critical-incidents technique, people are not asked to report on strategies applied. Instead, they are asked to indicate the behaviors they exhibited when performing certain tasks. Specifically, they are asked to describe the setting, the event prompting performance, the actions taken in response to the event, and the outcomes of these actions. Content analysis comparing good and poor performers, or good and poor performances, is used to identify more or less effective performance strategies and the behaviors associated with application of these strategies. Assessment occurs by having people indicate either the frequency or extent to which they manifest behaviors associated with an identified heuristic.
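The content-analysis step described above can be sketched as follows. This is a hypothetical illustration, not a published coding scheme: each reported incident is coded with the behaviors it contains, behavior frequencies are tallied separately for good and poor performers, and behaviors over-represented in one group point to more or less effective strategies. The incident data and behavior codes are invented.

```python
from collections import Counter

# Coded critical incidents: who reported them and which behaviors appeared.
incidents = [
    {"performer": "good", "behaviors": ["sought_feedback", "planned_ahead"]},
    {"performer": "good", "behaviors": ["planned_ahead"]},
    {"performer": "poor", "behaviors": ["skipped_review"]},
    {"performer": "poor", "behaviors": ["skipped_review", "planned_ahead"]},
]

def behavior_frequencies(group):
    # Tally how often each coded behavior appears in one performer group.
    counts = Counter()
    for inc in incidents:
        if inc["performer"] == group:
            counts.update(inc["behaviors"])
    return counts

good, poor = behavior_frequencies("good"), behavior_frequencies("poor")
# Positive differences suggest effective strategies; negative, ineffective ones.
diff = {b: good[b] - poor[b] for b in set(good) | set(poor)}
print(sorted(diff.items()))
# [('planned_ahead', 1), ('skipped_review', -2), ('sought_feedback', 1)]
```

Assessment would then ask respondents how frequently they show the behaviors linked to each identified heuristic.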

Testing

In testing, people are not asked to describe heuristics or the behaviors associated with them. Instead, the capability for applying heuristics is inferred from people's responses to a series of test items. In one variation on this approach, the objective scoring approach, test items are developed such that the problems presented call for certain processing activities. Response options are structured to capture the application of heuristics linked to effective process application. In one study, test items were developed to measure selective encoding, selective comparison, and selective combination. It was found that gifted students differed from nongifted students in that they were better able to identify relevant information (selective encoding).

In the subjective scoring approach, problems (typically open-ended problems) are developed such that a variety of responses might be used to address the problem. Judges are then asked to review people's responses and assess the extent to which they reflect the application of certain heuristics. This approach is applied in scoring the divergent thinking tests commonly used to measure creativity, by having judges evaluate responses with respect to three creative strategies: generating a number of ideas (fluency), generating unusual ideas (originality), and generating ideas through the use of multiple concepts (flexibility). In another illustration of this approach, army officers were asked to list the changes in people's lives that might occur if certain events happened (e.g., what would happen if the sea level rose?). Judges scored these responses for heuristics such as use of a longer time frame, application of principles, and a focus on positive versus negative consequences. Use of longer time frames and application of principles were found to be related to both performance on managerial problem-solving tasks and indices of real-world leader performance, yielding multiple correlations in the 0.40s.


URL: https://www.sciencedirect.com/science/article/pii/B0123693985001687

State of play – measuring the current visibility of the librarian and library

Aoife Lawton, in The Invisible Librarian, 2016

Measuring impact and value in public libraries

For public libraries, demonstrating value requires showing the impact they have on their communities. Researchers have called for more qualitative methods, such as the social audit of the Newcastle and Somerset library services (McMenemy, 2007). Quantitative statistics, such as book issues and the number of readers, do not give an adequate picture of value; in fact, they can weaken the position of the library. The real value of a public library is the positive experience a person has when using it, online or in person. This is why the value of the public library is difficult to measure: it is difficult to measure experience. The critical incident technique is a useful qualitative method that has been used in public libraries to evaluate the quality of service (Wong, 2013). The social value of a public library lies in its societal impact on its community. Whether the public library contributes to a more informed citizenry needs to be measured. Qualitative methodologies, such as profiling, have been successful at making the societal benefits of public library use tangible. Profiling has enabled UK Online Centres (UKOC) to translate faceless data into outcomes that are 'tangible, practicable and workable' (UKOC, quoted in Rooney-Browne, 2011). Profiles of how public libraries are helping people in UK communities are available on ukonlinecentres.com. Rooney-Browne (2011) provides a comprehensive overview of methods for measuring public library value.

It has been observed that the best way to measure impact is to do so over a long period of time (Brophy, 2006). In libraries where readers are fairly static (e.g. academic and school libraries), this may be easier to achieve. With school libraries, reading scores of students can be measured over a two- or three-year period, for example. For academic libraries, ranking of universities and student grades can be measured over time, particularly if librarians are delivering courses that form part of the curriculum.

Brophy introduced a LoI (Level of Impact) scale from −2 to 6 when evaluating the level of impact The People’s Network (PN) had in public libraries in the UK. PN involved the installation of computers in public libraries. In the US, the Washington iSchool is measuring the impact of internet accessibility in US public libraries. Over 800 US public libraries have registered for the impact survey. Another way for librarians to measure value/impact is to align services to (1) the mission of the organisation and (2) standards for libraries in their specialist area. A Public Library Improvement Model for Scotland was launched in 2014 with five quality indicators. In Wales, a framework for public libraries was published in 2014 (Welsh Government, 2014), which includes impact measurement for the first time. Alarmingly, in England there are no national standards or national impact measures for public libraries. This is devolved to local authorities on a voluntary basis (Anstice, 2014). The Sieghart report does not include a recommendation for standards for public libraries in England. This is out of step with best practice internationally. The USA, Australia and Canada all have standards for public libraries. The Australian Public Library Alliance offers benchmarking calculators and the US Public Library Association (PLA) offers public library service data including outcome measures to its members. The PLA is putting a strategic emphasis on impact measurement and is due to launch its ‘Project Outcome’ with a three-year strategic plan in June 2015. Project Outcome includes a set of survey instruments, data entry and analysis tools, online training and support for libraries’ implementation and advocacy efforts.

An innovative ranking system for public libraries in Europe has been developed by two Scandinavian librarians, Berndtson and Öström. The Library Ranking Europe (LRE) will certainly improve the visibility of participating public libraries. The aim of the project is to stimulate benchmarking and the development of quality among some 65,000 European public libraries. Interestingly, the visibility of libraries is part of the criteria for ranking libraries on a scale of one to six stars. More criteria may be accessed from the libraryranking.com website.


URL: https://www.sciencedirect.com/science/article/pii/B9780081001714000070

Employee development

Chris Rowley, Wes Harry, in Managing People Globally, 2011

4.12.4 Ranking

Here there is less discretion for the appraiser, who must place in rank order, from highest to lowest performing, all those employees being appraised. This is resisted as a method in parts of Asia, but it is found in some organisations, especially when used to rank foreign workers.

4.12.5 Forced distribution

Here appraisers rate people on a forced distribution of categories – for instance, 10 per cent low; 20 per cent low average; 40 per cent average; 20 per cent high average; 10 per cent high. This can be seen in Table 4.10. Again, the appraisers’ discretion is constrained with this method, so it is less used by Asian organisations, although international employers regularly use this system to compel local managers to make judgements about performance.

Table 4.10. Example of forced distribution

High | Next | Middle | Next | Low
10% | 20% | 40% | 20% | 10%
Names | Names | Names | Names | Names
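The allocation in Table 4.10 can be sketched mechanically. The following is an illustrative sketch (the employee names and scores are invented): appraisees are sorted by score and sliced into the 10/20/40/20/10 bands, with any rounding remainder absorbed by the final band.

```python
def forced_distribution(scores, bands=(0.10, 0.20, 0.40, 0.20, 0.10)):
    """scores: dict of name -> score; returns one list per band, lowest band first."""
    ranked = sorted(scores, key=scores.get)  # lowest scorer first
    result, start = [], 0
    for i, share in enumerate(bands):
        # The last band takes whatever remains, so rounding never loses anyone.
        end = len(ranked) if i == len(bands) - 1 else start + round(share * len(ranked))
        result.append(ranked[start:end])
        start = end
    return result

scores = {"A": 55, "B": 72, "C": 61, "D": 90, "E": 48,
          "F": 66, "G": 80, "H": 58, "I": 75, "J": 69}
low, low_avg, avg, high_avg, high = forced_distribution(scores)
print(low, high)  # ['E'] ['D']
```

With ten appraisees the bands contain 1, 2, 4, 2 and 1 people, matching the percentages in the table.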

4.12.6 Rating scales

Here various attributes of performance are listed (for example, accuracy, knowledge, quality of work) and the person is evaluated on each of these dimensions (usually from the job description) individually (see Chapter 2). A scale is often used – for example, 1 poor; 2 below average; 3 average; 4 above average; 5 excellent. An overall score is then calculated, so there is some ease of interpretation. An example of a 5-point rating scale can be seen in Table 4.11. When formal appraisal systems are used in Asia, this is one of the oldest and most popular methods.

Table 4.11. Example of rating scale

Scale
Criteria | Poor | Average | Good | Very good | Excellent
Time keeping
Appearance
Communication skills
Relationships with subordinates
Relationship with seniors
Organisation skills
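The overall-score calculation described above can be sketched briefly. This is a minimal illustration (the criteria and ratings below are invented, and the 1-5 mapping follows the example in the text): each dimension's verbal rating is converted to its scale value and the values are summed.

```python
# Verbal anchors mapped to scale values, as in the text's example.
SCALE = {"poor": 1, "below average": 2, "average": 3,
         "above average": 4, "excellent": 5}

ratings = {  # criterion -> rating chosen by the appraiser (illustrative)
    "accuracy": "above average",
    "knowledge": "average",
    "quality of work": "excellent",
}

scores = [SCALE[r] for r in ratings.values()]
overall = sum(scores)             # overall score, easy to interpret
average = overall / len(scores)   # or report a per-dimension mean
print(overall, average)           # 12 4.0
```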

4.12.7 Critical incidents

This method is a procedure for collecting observed incidents that are seen as important or critical to performance. A list (or log) of incidents is compiled, with details of examples of positive and negative employee performance recorded and kept. High-performing employees are identified as those performing well during many critical incidents. It is considered that adequate, but not exceptional, employees are those involved in only a few critical incidents.
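The incident log described above can be represented with a simple data structure. This is a hypothetical sketch, not a prescribed format: dated positive and negative incidents are recorded per employee, and counting the positives identifies those performing well during many critical incidents. The names and incidents are invented.

```python
from collections import defaultdict

log = defaultdict(list)  # employee -> list of (date, sign, note)

def record(employee, date, positive, note):
    # Append one observed incident, marked positive ("+") or negative ("-").
    log[employee].append((date, "+" if positive else "-", note))

record("Lee", "2024-03-01", True, "Resolved an urgent client escalation")
record("Lee", "2024-04-12", True, "Caught a costly invoicing error")
record("Kim", "2024-03-20", False, "Missed a reporting deadline")

def positives(employee):
    # Count positive incidents; a high count flags a high performer.
    return sum(1 for _, sign, _ in log[employee] if sign == "+")

print(positives("Lee"), positives("Kim"))  # 2 0
```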

4.12.8 Behaviourally anchored rating scales (BARS)

These appraisal methods specify definite, observable and measurable behaviour. The format uses critical incidents to serve as ‘anchor statements’ on a scale. The form contains defined performance dimensions, each with critical incident anchors (examples of actual behaviour on the job, not general descriptions or traits). The appraiser then rates the person against these predetermined factors. An example can be seen in Table 4.12. These methods are used less often in some parts of Asia because of their complexity and need for judgement. However, the advantages of such methods include developing the following.

Table 4.12. Example of BARS performance dimension

Performance dimension scale development under BARS for the dimension ‘Ability to absorb and interpret policies for an employee relations specialist’ (Rated 1–9).

This employee relations specialist could be expected to:
9 – Serve as an information source concerning new and changed policies for others in the organisation
8 – Be aware quickly of program changes and explain these to employees
7 – Reconcile conflicting policies and procedures correctly to meet HRM goals
6 – Recognise the need for additional information to gain a better understanding of policy changes
5 – Complete various HRM forms correctly after receiving instruction on them
4 – Require some help and practice in mastering new policies and procedures
3 – Know there is a problem, but go down many blind alleys before realising they are wrong
2 – Incorrectly interpret guidelines, creating problems for line managers
1 – Be unable to learn new procedures even after repeated explanations

Source: Adapted from DeCenzo and Robbins (1999)

Validity of each of the main duties (obtained from job descriptions)

Agreement over suitable descriptions for each category of behaviour

Economies of scale if many people have the same job description

4.12.9 Behavioural observation scales (BOS)

Like the above technique, this method uses critical incident techniques to identify a series of behaviours in the job. However, the format is different in that, instead of identifying behaviours exhibited during the rating period, the appraiser needs to indicate on a scale how often the person was actually observed engaging in the specific behaviour under review. An example of this method can be seen in Table 4.13. Because of its complexity this method is less common in some parts of Asia.

Table 4.13. Example of BOS performance dimension

Sample BOS items for the performance dimension ‘Communicating with subordinates’ (Rated 1–5).

Criteria (1 = Almost never … 5 = Almost always)
Puts up notices on bulletin boards when new policies or procedures are implemented. 1 2 3 4 5
Maintains eye contact when talking to employees. 1 2 3 4 5
Uses both written memos and verbal discussion when giving instructions. 1 2 3 4 5
Discusses changes in policies or procedures with employees before implementing them. 1 2 3 4 5
Writes memos that are clear, concise and easy to understand. 1 2 3 4 5
Total Performance Level:
5–9: Below adequate
10–14: Adequate
15–19: Good
20 +: Excellent

Source: Adapted from Fisher et al. (1999)
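The total-performance calculation in Table 4.13 can be sketched as follows: the five 1-5 frequency ratings are summed and the total mapped onto the table's bands. The item ratings used in the example are invented for illustration.

```python
def bos_level(item_ratings):
    # Sum the 1-5 frequency ratings and map the total to Table 4.13's bands.
    total = sum(item_ratings)
    if total <= 9:
        return total, "Below adequate"
    if total <= 14:
        return total, "Adequate"
    if total <= 19:
        return total, "Good"
    return total, "Excellent"

# Invented ratings for the five communication items
# (1 = almost never, 5 = almost always).
total, band = bos_level([4, 3, 5, 4, 4])
print(total, band)  # 20 Excellent
```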

4.12.10 Peer

With this method, colleagues and co-workers at the same level assess each other’s performance. This is increasingly popular in the West where team working has been encouraged. One advantage of this method is that it is based on actual experience in the workplace. In some Asian cultures, however, seeking the views of colleagues and co-workers at the same level risks losing power and respect. Asian group loyalty would lead almost all peers to rate each other highly.

4.12.11 Subordinate

With this method, employees are asked to rate their bosses. There have been experiments with this method at DuPont, Nabisco, Mobil, GE and UPS in the US, and BP in the UK. This method is seen as more ‘democratic’ and useful in improving channels of communication. In much of Asia there would be not only loss of ‘face’, but also loss of power, respect and loyalty in using such a system. Therefore, this method is rarely acceptable in parts of Asia.

4.12.12 Self-appraisal

One of the more recent methods to take off has been self-appraisal. To seek more differentiation in rankings with such formats, people can be asked to rank different aspects of their performance relative to other aspects. In Asia there is a range of responses to self-appraisal: it is expected that, for example, Malaysians or Taiwanese would be modest in their assessments, while Pakistanis or Indians would present a more generous self-image. There are also variations by sector and organisation, so that, for example, investment bankers are more inclined to rate themselves highly, while social workers are more inclined to say that they could do better.

4.12.13 360-degree appraisal

This is one of the latest trends in this area in the West. This method involves as many different people as possible in performance evaluation. This can range from subordinates to peers to managers, customers and clients. As most Asians are unwilling to be rated by peers or subordinates, this technique is less common in parts of Asia, although some international organisations attempt to impose the system.

4.12.14 Management by objectives (MBO)

This method was traditionally used more for professional and managerial grades, but is still found in many organisations at lower levels. This is seen particularly when performance management and assessment (PMA) is used. There is commonly a cycle, which may include the following stages.

1. Discussion – forms are completed as the basis for initial discussions.
2. Agreement – reached on objectives/goals to strive to achieve during the period.
3. Training and development – needed for achievement of objectives.
4. Modification – as a result of changed circumstances (corporate policy, environment).
5. Review – at the end of the period to see if goals have been met and fresh goals set.

Better managed Asian organisations tend to use some form of MBO; those most concerned with improving performance use PMA.

4.12.15 Interviews

While we have seen that there are many types of performance appraisal, those that involve an interview at some stage are common. Interviews are an important part of the process, not least as they are integral to the communication and feedback that is often involved. Some of the same points that apply to employee selection interviews (see Chapter 2) can be followed as ground rules for performance appraisal interviews. These can be seen in Table 4.14. Problems are encountered, however, when evaluating the performance of an individual, irrespective of the interview style.

Table 4.14. Interview structure

Stage | Characteristics
1. Preparation | Armed with all facts; sure how to proceed; clear purpose and aim
2. Purpose and rapport | Agree purpose; agree structure of meeting; check pre-work complete
3. Factual review | Review facts about performance in period; appraiser reinforcement
4. Appraisee views | Asked to comment on performance; what has gone well/less well, liked/disliked; possible new objectives
5. Appraiser views | Add own perspective; recognition and constructive criticism; questions about what was said
6. Problem-solving approach | Discussion of differences; discuss how to resolve; consider developmental training needs
7. Objective setting | Agree what actions to be taken; who takes them

Source: Adapted from Torrington and Hall (1998)


URL: https://www.sciencedirect.com/science/article/pii/B9781843342236500043

Job Analysis and Work Roles, Psychology of

H. Dunckel, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 Classification Criteria

Job-analysis methods and techniques can be distinguished on the basis of various criteria (see also McCormick 1976): the unit of analysis, intended applications, theoretical foundation, method users and methods of data collection, methodological standards and results.

2.1 Unit of Analysis

The job-analysis unit is the job itself and the tasks that comprise it. For this reason, many job-analysis systems relate directly to the tasks and the immediate work conditions. More comprehensive job-analysis systems (see Dunckel 1999, Gael 1988) take into account the fact that essential features (e.g., the potential of the individual for defining goals self-reliantly) and results of the tasks become comprehensible only if the overall organizational conditions and the knowledge, skills and attitudes of the workers are taken into consideration. They therefore extend the analysis to include both the organizational conditions (e.g., degree of responsibility, codetermination potential, values and norms, leadership climate) and the working individuals with their respective knowledge, skills and attitudes.

Methods can be classified, based on widely accepted criteria, according to whether they are more job/task-oriented or worker/person-oriented. Job-oriented analysis is concerned with analyzing tasks, job conditions or job features, without regard for the concrete individuals involved and their different knowledge, skills, abilities and attitudes. Person-oriented analysis, on the other hand, is centred on the person and specifically concerned with differences between individual workers in the perception, interpretation and performance of the job.

Typical job-oriented approaches are: Functional Job Analysis, Task Inventories, Health Services Mobility Approach; typical person-oriented approaches are: Position Analysis Questionnaire, Critical Incident Technique, Ability Requirement Scales (see Gael 1988, Ghorpade 1988).

The distinction between job-oriented and person-oriented analysis is of conceptual and practical significance. In stress research, for example, it is conceptually important to begin by determining stress factors independently of the person involved (and their individual perceptions and coping behavior) in a job-oriented manner, then going on to examine how objectively identical stress factors are perceived and dealt with differently by different individuals and how they affect different people. However, the (person-oriented) analysis of these inter-individual differences means first determining objective stress factors, because we are concerned here with inter-individual differences in relation to objectively identical stress factors. This distinction is of practical significance when planning future workplaces for which there are not yet any workers. Here, the job-oriented approach is the only way of obtaining job analysis information.

2.2 Intended Applications

Techniques and methods have been and are being developed for different goals and potential applications. The job-analysis literature contains numerous lists of goals and proposed uses for job-analysis information (see Ash 1988, Lees and Cordery 2000). Basically speaking, two goal complexes can be distinguished: work/job and organization design, and personnel development. More concrete aims of job analyses are, for instance, comparing work activities, changing and planning the work situation and organization, determining skill requirements and factors defining aptitude requirements, technology assessment, maintenance of industrial health and safety standards, and job evaluation.

Job-analysis methods and techniques play an important part in human-resources management and personnel selection (see Algera and Greuter 1998). They are used to determine more precisely the requirements a person must meet to perform their work tasks. These requirements may be specified as tasks to be accomplished, as behavioral requirements (e.g., required behavior or behavioral repertoire), as eligibility requirements (e.g., knowledge and skills) or as trait requirements (e.g., abilities and interests). For each of these requirement types, a number of techniques are available (see Gael 1988).

If the results of such analyses are combined, job analysis can also be used to describe (work) roles as defined by Katz and Kahn (1978). According to these authors, roles are standardized behavior patterns demanded of all persons involved in a given functional relationship. Here, job analysis can also help identify when and under what conditions role expectations (of the different organization members with respect to individual workplace occupants) can lead to conflicts and job stress.

The intended applications of job analysis are not necessarily independent of one another. If, for example, the goal is to change the work organization, it will often be necessary to analyze the technological implications and the consequences for the workers involved as well.

Besides the application purpose, the application area also plays an important role. Job-analysis techniques and methods consider different levels and units of an organization. A distinction must be made between:

(a) sectors (e.g., industrial, administrative, service);

(b) levels of an organization (e.g., enterprise as a whole, business division, department, workplace group, workplace, work task);

(c) professional groups (e.g., executives, specialized professional groups); and

(d) activity classes (e.g., assembly, control and monitoring activities, administrative activities, service activities).

Depending on the specific concern, the emphasis will be on different information. Job analysis, too, faces the problem of breadth vs. depth. The more detailed the information, the more limited the application purpose. It may therefore be a good idea to proceed in several steps, starting with rough analyses to determine the key analysis areas, which are then analyzed in greater detail.

2.3 Theoretical Foundation

The theoretical foundation largely determines which information is captured at which level or with which analysis unit.

Work studies as defined by Taylor or Gilbreth (see Ghorpade 1988) are based on an additive movement (or motion) model. Techniques working on this basis thus attempt to define elementary movement units (e.g., grasping with the hand), combining these additively in order to then determine, say, the standard time required to perform the work.
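The additive logic of such work studies can be made concrete in a few lines: elementary movement times are simply summed, and an allowance factor applied, to yield a standard time. A minimal sketch with invented element names and times (real predetermined-time tables such as MTM are far more detailed):

```python
# Hypothetical elementary movement times in seconds; the element names and
# values are invented for illustration, not taken from any real time table.
elements = {"reach": 0.4, "grasp": 0.3, "move": 0.5, "position": 0.6, "release": 0.2}

cycle_time = sum(elements.values())  # additive combination of elementary units
allowance = 1.15                     # e.g., a 15% allowance for rest and delays
standard_time = cycle_time * allowance

print(f"standard time per cycle: {standard_time:.2f} s")
```

The point of the sketch is only the additive model itself: the whole is treated as nothing more than the sum of its elementary parts, which is precisely the assumption the action-oriented approaches discussed below call into question.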

Approaches rooted in behaviorism also attempt to identify elementary units (of behavior) (e.g., processing materials, recognizing optical differences). Their units are bigger, though; they are therefore not movement- but behavior-oriented.

These approaches, however, fail to take into account the fact that movements merge to form ‘wholes,’ are integrated in complex webs of activity and the regulating mental processes and representations and are codetermined by these. This is why many recent developments, especially in Germany, are based on the ‘action regulation theory’ (see Frese and Sabini 1985, Oesterreich and Volpert 1987), thus giving priority to questions relating to the psychological regulation of action, the level of psychological regulation, the completeness of actions, the degrees of freedom (Hacker 1998) or the scope for action or decision. The guiding idea here is that of humane work, i.e., work geared to human strengths and enabling individual workers to perform their job under permanently tolerable conditions, without impairment of their well-being and in a manner conducive to their personal development.

In the emphasis they place on characteristics such as scope for action, variability, identity and importance of the task, action-theory approaches are in keeping with the traditions of industrial sociology, e.g., the work of Turner and Lawrence (1965) and the work of Hackman and Oldham (1975) that builds on this. Since the latter approaches are specifically concerned with questions relating to the ‘motivation potential of work,’ they can also be classified as oriented to motivation theory.

Besides drawing on approaches based on behavior, action, and motivation theory, job-analysis systems also have recourse to concepts of stress theory, ergonomics and human engineering; in addition, worker-oriented approaches draw on concepts of personality theory.

2.4 Method Users

The main users of job-analysis methods are the workers themselves, first-level supervisors, higher-level supervisors, job analysts, technical experts and other company experts, but also works and staff councils.

Whether and to what extent a method can be used depends, among other things, on the application requirements that must be met by the users. Important factors here, besides formal qualifications, are the amount of experience the users need in analysis techniques, whether they can teach themselves how to use such techniques or whether special training is required.

2.5 Methods of Data Collection

There are also fundamental differences between techniques in terms of the data-collection methods used, e.g.:

(a) interview methods (e.g., individual and group interviews, technical conferences with experts, more or less structured questionnaires and check lists);

(b) observational methods (e.g., direct and indirect observation, continuous observation, work sampling);

(c) analyses of company data (e.g., working hours lost, accident statistics, workplace descriptions);

(d) analyses of documents (e.g., file analyses, form analyses); and

(e) work activities performed by job analysts.

Each of the methods has its advantages and disadvantages. It is therefore a good idea to combine several methods (see Gael 1988). These advantages and disadvantages can be highlighted by comparing interview and observation methods.
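The work sampling mentioned among the observational methods is at bottom a statistical estimate: the share of time a worker spends on an activity is inferred from randomly timed spot observations rather than continuous observation. A minimal sketch, with entirely hypothetical observation data and a normal-approximation confidence interval:

```python
import math

# Hypothetical work-sampling record: at each randomly chosen moment an
# observer notes whether the worker was engaged in the activity of interest
# (1) or not (0). These values are invented for illustration.
observations = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1]

n = len(observations)
p = sum(observations) / n  # estimated share of time spent on the activity

# 95% confidence interval via the normal approximation to the binomial
half_width = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"estimated share of time: {p:.2f} +/- {half_width:.2f}")
```

With only 20 spot observations the interval is wide; in practice the required sample size is chosen from the desired precision, which is one reason work sampling is usually combined with other data-collection methods.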

2.5.1 Interview methods

It is a good idea to interview workers, for they are the ones who know their own work activities best. Furthermore, such interviews are indispensable when the workers' subjective assessment of the work is needed or psychological processes are to be evaluated because these can only be accessed directly by introspection. In addition, interview methods, especially questionnaires, are frequently the method of choice because they are relatively easy to develop and use. Interview methods are the most frequently used job-analysis technique.

This should not, however, blind us to the fact that interview methods have a number of weaknesses. Some typical problems are:

(a) comprehension problems of workers who are not so accustomed to dealing with the written language (e.g., in the case of questionnaires);

(b) the ambiguity of everyday language;

(c) difficulties in translating scientific terms into everyday language; and

(d) the problem of putting into words many aspects of psychological regulation processes (Hacker 1998).

2.5.2 Observation methods

Observation methods are generally used in cases where it is important to avoid the sort of errors that can occur in interview methods or ‘bias’ as a result of evaluation and interpretation processes on the part of the workers, or when, in future workplace design, no workers are yet available for the planned jobs.

Observation methods are often seen as a way of getting round the problems inherent in interview methods and obtaining ‘more objective’ data. In reality, they are subject to the same sort of problems as interview methods, in some cases giving rise to additional problems:

(a) The quality of job observations deteriorates for complex work activities.

(b) Certain temporally dynamic aspects of the work activity (e.g., pressure of time) are harder to observe.

(c) Infrequent events that are nevertheless of significance for the job (e.g., starting and stopping machines, annual accounts) are often not included.

(d) Observers, too, are subject to evaluation, interpretation and ‘biasing’ processes. For instance, observers tend to rate workplaces as uniformly good or bad.

These typical advantages and disadvantages mean that proper job analysis involves considering precisely which methods are suitable. This also means that users must be aware of the problems inherent in these methods, carrying out, where necessary, appropriate training measures to reduce them.

It is also a good idea—whenever this is feasible—to combine different methods, e.g., questionnaires, interview, and observation methods. For this reason, many techniques also include the observational interview as a proven data-collection method, based on structured observation of the work processes and related interviews with the workers involved at their workplace.

2.6 Methodological Standards

Job-analysis methods differ, among other things, in their degree of standardization—ranging from non-standardized ‘free’ descriptions to semi-standardized interviews to observations and interviews following exactly prescribed rules. They differ in the amount of time they take (from 30 minutes to several hours), their psychometric quality, the number of dimensions captured, etc. Which method to choose cannot be decided in general; this depends largely on the intended application and on available knowledge and theory building with respect to the problems under investigation.

Scientifically based methods should be reliable and valid:

(a) The reliability of a method shows whether and to what extent it can be used to obtain ‘stable,’ ‘reliable,’ or ‘replicable’ results. Ideally, repeated measurements of the same object should show as little deviation as possible (see Oesterreich and Bortz 1994).

(b) The validity of a method indicates the extent to which it actually measures what it is supposed to measure.

Examining these quality criteria is not only of scientific interest, but also of utmost practical importance. If the results of the analysis are to have practical consequences, e.g., for job design, decision-makers must be able to depend on the fact that the results are reliable and valid.

2.7 Results

Job-analysis methods provide quantitative and qualitative results. Qualitative results are mostly verbal, narrative descriptions of the job or the tasks; quantitative results are presented in numerical—and in some cases graphical—form. Complex methods generally present results in both qualitative and quantitative form.


URL: https://www.sciencedirect.com/science/article/pii/B0080430767013978

Job Analysis and Work Roles, Psychology of

Heiner Dunckel, in International Encyclopedia of the Social & Behavioral Sciences (Second Edition), 2015

Classification Criteria

Job analysis methods and techniques can be distinguished on the basis of various criteria (also McCormick, 1976): unit of analysis, intended applications, theoretical foundation, method users and methods of data collection, methodological standards, and results.

Unit of Analysis

The job analysis unit is the job itself and the tasks that comprise it. For this reason, many job analysis systems relate directly to the tasks and the immediate work conditions. More comprehensive job analysis systems (Dunckel, 1999; Gael, 1988) take into account the fact that essential features (e.g., potential for defining goals self-reliantly) and results of the tasks only become comprehensible if the overall organizational conditions and the knowledge, skills, and attitudes of the workers are taken into consideration. They therefore extend the analysis to include both the organizational conditions (e.g., degree of responsibility, codetermination potential, values and norms, leadership climate) and the working individuals with their respective knowledge, skills, and attitudes.

Methods can be classified, based on widely accepted criteria, according to whether they are more job/task-oriented or worker/person-oriented (Brannick et al., 2007). Job-oriented analysis is concerned with analyzing tasks, job conditions, or job features, without regard for the concrete individuals involved and their different knowledge, skills, abilities, and attitudes. Person-oriented analysis, on the other hand, is centered on the person and specifically concerned with differences among individual workers in the perception, interpretation, and performance of the job. Typical job-oriented approaches are: functional job analysis, task inventories, health services mobility approach (Gael, 1988; Ghorpade, 1988); typical person-oriented approaches are: position analysis questionnaire, critical incident technique, ability requirement scales, cognitive task analysis (Gael, 1988; Ghorpade, 1988).

The distinction between job-oriented and person-oriented analysis is of conceptual and practical significance. In stress research, for example, it is conceptually important to begin by determining stress factors independently of the person involved (and their individual perceptions and coping behavior) in a job-oriented manner, then going on to examine how objectively identical stress factors are differently perceived and dealt with by different individuals and how they affect different people. However, the (person-oriented) analysis of these interindividual differences means first determining objective stress factors because we are concerned here with interindividual differences in relation to objectively identical stress factors. This distinction is of practical significance when planning future workplaces for which there are not yet any workers. Here, the job-oriented approach is the only way of obtaining job analysis information.

Intended Applications

Techniques and methods have been and are being developed for different goals and potential applications. Job analysis literature contains numerous different lists of goals and proposed applications for job analysis information (Ash, 1988; Lees and Cordery, 2000). Basically speaking, two complexes of goals can be distinguished: work/job and organization design, and personnel development. More concrete aims of job analyses are, for instance, comparing work activities, changing and planning the work situation and organization, determining skill requirements and factors defining aptitude requirements, technology assessment, maintenance of industrial health and safety standards, and job evaluation.

Job analysis methods and techniques play an important part in human resources management and personnel selection (Algera and Greuter, 1998; Brannick et al., 2007). They are used to determine more precisely the requirements a person must meet to perform his or her work tasks. These requirements may be specified as tasks to be accomplished, behavioral requirements (e.g., required behavior or behavioral repertoire), eligibility requirements (e.g., knowledge and skills), or trait requirements (e.g., abilities and interests). For each of these requirement types, there are a number of techniques available (Gael, 1988).

If the results of such analyses are combined, job analysis can also be used to describe (work) roles as defined by Katz and Kahn (1978) (see also Dierdorff and Morgeson, 2007). According to these authors, roles are standardized behavior patterns demanded of all persons involved in a given functional relationship. Here, job analysis can also help to identify when and under what conditions role expectations (of the different organization members with respect to individual workplace occupants) can lead to conflicts and job stress.

The intended applications of job analysis are not necessarily independent of one another. If, for example, the goal is to change the work organization, it will often be necessary to analyze both the technological implications and the consequences for the workers involved.

Besides the application purpose, the application area also plays an important role. Job analysis techniques and methods consider different levels and units of an organization. A distinction must be made between

sectors (e.g., industrial, administrative, service);

levels of an organization (e.g., enterprise as a whole, business division, department, workplace group, workplace, work task);

professional groups (e.g., executives, specialized professional groups); and

activity classes (e.g., assembly, control and monitoring activities, administrative activities, service activities).

Depending on the specific concern, the emphasis will be on different information. Job analysis, too, faces the problem of breadth versus depth. The more detailed the information, the more limited the application purpose. It may therefore be a good idea to proceed in several steps, starting with rough analyses to determine the key analysis areas, which are then analyzed in greater detail.

Theoretical Foundation

The theoretical foundation largely determines which information is captured at which level or with which analysis unit.

Work studies as defined by Taylor or Gilbreth (see Ghorpade, 1988) are based on an additive movement (or motion) model. Techniques working on this basis thus attempt to define elementary movement units (e.g., grasping with the hand), combining these additively in order to then determine, say, the standard time required to perform the work.

Approaches rooted in behaviorism also attempt to identify elementary units (of behavior) (e.g., processing materials, recognizing optical differences). Their units are bigger, however, and hence they are behavior- rather than movement-oriented.

These approaches, however, fail to take into account the fact that movements merge to form ‘wholes,’ are integrated in complex webs of activity and the regulating mental processes and representations, and are codetermined by these. This is why many developments, especially in Germany, are based on the ‘action regulation theory’ (Frese and Sabini, 1985; Oesterreich and Volpert, 1987), thus giving priority to questions relating to the psychological regulation of action, the level of psychological regulation, the completeness of actions, the degrees of freedom (Hacker, 1998), or the scope for action or decision. The guiding idea here is that of humane work, i.e., work geared to human strengths and enabling individual workers to perform their job under permanently tolerable conditions, without impairment of their well-being and in a manner conducive to their personal development.

In the emphasis they place on characteristics such as scope for action, variability, identity, and importance of the task, action theory approaches are in keeping with the traditions of industrial sociology, e.g., the work of Turner and Lawrence (1965) and the work of Hackman and Oldham (1975) that build on this. Since the latter approaches are specifically concerned with questions relating to the ‘motivation potential of work,’ they can also be classified as oriented to motivation theory (see Work Motivation).

Besides drawing on approaches based on behavior, action, and motivation theory, job analysis systems also have recourse to concepts of stress theory (see Workplace Stress), ergonomics, and human engineering (see Human Factors and Ergonomics); in addition, worker-oriented approaches draw on concepts of personality theory.

Method Users

The main users of job analysis methods are the workers themselves, first-level supervisors, higher-level supervisors, job analysts, technical experts and other company experts, but also works and staff councils.

Whether and to what extent a method can be used depends, among other things, on the application requirements that must be met by the users. Important factors here, besides formal qualifications, are the amount of experience the users need in analysis techniques, whether they can teach themselves how to use such techniques or whether special training is required.

Methods of Data Collection

There are also fundamental differences between techniques in terms of the data collection methods used, for example:

interview methods (e.g., individual and group interviews, technical conferences with experts, more or less structured questionnaires and checklists);

observational methods (e.g., direct and indirect observation, continuous observation, work sampling);

analyses of company data (e.g., working hours lost, accident statistics, workplace descriptions);

analyses of documents (e.g., file analyses, form analyses); and

work activities performed by job analysts.

Each of the methods has its advantages and disadvantages. It is therefore a good idea to combine several methods (Brannick et al., 2007; Gael, 1988). These advantages and disadvantages can be highlighted by comparing interview and observation methods.

Interview Methods

It is a good idea to interview workers, for they are the ones who know their own work activities best. Furthermore, such interviews are indispensable when the workers' subjective assessment of the work is needed or psychological processes are to be evaluated because these can only be accessed directly by introspection. In addition, interviews, especially questionnaires, are frequently the method of choice because they are relatively easy to develop and use. Interview methods are the most frequently used job analysis technique.

This should not, however, blind us to the fact that interview methods have a number of weaknesses. Some typical problems are:

comprehension problems of workers who are not so accustomed to dealing with the written language (e.g., in the case of questionnaires);

the ambiguity of everyday language;

difficulties in translating scientific terms into everyday language; and

the problem of putting into words many aspects of psychological regulation processes (Hacker, 1998).

Observation Methods

Observation methods are generally used in cases where it is important to avoid the sort of errors that can occur in interview methods or ‘bias’ as a result of evaluation and interpretation processes on the part of the workers, or when, in future workplace design, no workers are yet available for the planned jobs.

Observation methods are often seen as a way of getting around the problems inherent in interview methods and obtaining ‘more objective’ data. In reality, they are subject to the same sort of problems as interview methods, in some cases giving rise to additional problems:

The quality of job observations deteriorates for complex work activities.

Certain temporally dynamic aspects of the work activity (e.g., pressure of time) are harder to observe.

Infrequent events that are nevertheless of significance for the job (e.g., starting and stopping machines, annual accounts) are often not included.

Observers, too, are subject to evaluation, interpretation, and ‘biasing’ processes. For instance, observers tend to rate workplaces as uniformly good or bad.

These typical advantages and disadvantages mean that proper job analysis involves considering precisely which methods are suitable. This also means that users must be aware of the problems inherent in these methods, carrying out, where necessary, appropriate training measures to reduce them.

It is also a good idea – whenever this is feasible – to combine different methods, e.g., questionnaires, interview, and observation methods. For this reason, many techniques also include the observational interview as a proven data collection method, based on structured observation of the work processes and related interviews with the workers involved at their workplace.

This is also reasonable and required from a methodological point of view, as a number of surveys show that data provided by incumbents have significantly lower reliability than data collected by analysts and experts (Dierdorff and Wilson, 2003; Voskuijl and Sliedregt, 2002; see also Dunckel and Resch, 2010).

Methodological Standards

Job analysis methods differ, among other things, in their degree of standardization – ranging from nonstandardized ‘free’ descriptions to semistandardized interviews to observations and interviews following exactly prescribed rules. They differ in the amount of time they take (from 30 min to several hours), their psychometric quality, the number of dimensions captured, etc. Which method to choose cannot be decided in a general manner; this depends largely on the intended application and on available knowledge and theory building with respect to the problems under investigation.

Scientifically based methods should be reliable and valid:

The reliability of a method shows whether and to what extent a method can be used to obtain ‘stable,’ ‘reliable,’ or ‘replicable’ results. Ideally, repeated measurements of the same object should show as little deviation as possible (Oesterreich and Bortz, 1994).

The validity of a method indicates the extent to which it actually measures what it is supposed to measure.

Examining these quality criteria is not only of scientific interest, but also of utmost practical importance. If the results of the analysis are to have practical consequences, e.g., for job design, decision makers must be able to depend on the fact that the results are reliable and valid.

Results

Job analysis methods provide quantitative and qualitative results. Qualitative results are mostly verbal, narrative descriptions of the job or the tasks; quantitative results are presented in numerical – and in some cases graphical – form. Complex methods generally present results in both qualitative and quantitative forms.


URL: https://www.sciencedirect.com/science/article/pii/B9780080970868220096

Measuring workplace bullying

Helen Cowie, ... Beatriz Pereira, in Aggression and Violent Behavior, 2002

CIT (Flanagan, 1954; Lewis, 1992) is a job analysis technique that focuses participants on a particular scenario. Flanagan used the method to analyze failure in military flying training during the Second World War. Liefooghe and Olafsson (1999) used Lewis's adaptation of CIT by asking participants to describe hypothetical individuals who are ‘extremely like … bullies’ and those who are ‘not at all like … bullies.’ The aim of the research was to investigate bullying as a social and cultural phenomenon, that is, one that was experienced not only by a ‘bully’ and a ‘victim,’ but also by a whole group or even by a culture at large. Forty participants (university staff and students) were interviewed in focus groups in order to elicit representations of the phenomenon through discussion. The researchers claim that through the process of exploring people's representations of bullying at work they were able to gain understanding of the different individual and organizational factors that influence the emergence of the concept of bullying into the social domain.


URL: https://www.sciencedirect.com/science/article/pii/S1359178900000343

Children's participation in consultations and decision-making at health service level: A review of the literature

Imelda Coyne, in International Journal of Nursing Studies, 2008

Similarly, other research studies suggest that health professionals may experience difficulty facilitating children's participation in decision-making for various reasons (Runeson et al., 2001). Using the critical incident technique, Runeson et al. asked 92 Swedish health professionals (81 nurses, 8 doctors, 2 play therapists, and 1 psychologist) to write about a situation in which a child (0–15) was allowed or not allowed to participate in decision-making. They found that several factors influenced whether children's voices were heard, such as the child's protest, the child's age and maturity, the role of parents, the attitudes of staff, the time factor, and alternative solutions to the problem. Miller (2001) found that although nurses (n = 8) viewed children's involvement as an integral part of the nurse's role, they found it the most challenging part because of issues related to truth telling (the truth could be frightening and damaging) and consent (the balance between free will and persuasion). Miller suggests that to facilitate children's involvement in decision-making, nurses need to know the individual and the context, provide individuals with age-appropriate information, and take account of the ethical, legal, and professional dimensions.


URL: https://www.sciencedirect.com/science/article/pii/S0020748908001466