What is the ability to determine the logical sequence in a problem and solve it accordingly?

Nature of the Resistive Switching Phenomena in TiO2 and SrTiO3

Krzysztof Szot, ... Wolfgang Speier, in Solid State Physics, 2014

4.6 Voids

Before proceeding in the logical sequence of our paper to the hierarchical order of the defects and the analysis of the surface as a two-dimensional (planar) defect, we want to draw attention at this point to the fact that very complicated three-dimensional defects can exist in the bulk of real single crystals. A simple optical inspection of Verneuil-grown crystal boules already reveals growth bands and voids (bubbles), whose radius increases toward the edge of the crystal (see Fig. 4.35). A TEM analysis gave similar results for the distribution of micro- and nanosized voids, with radii ranging from several μm down to 10 nm [211]. At higher TEM magnification, the small voids are found to be additionally accompanied by line defects, namely dislocation loops, i.e., curved dislocations that close on themselves. According to Wang et al. [211], heating SrTiO3 crystals (at ~ 400 °C) or irradiating them with electrons causes small dislocation loops to hop from one position to another, and this hopping is often accompanied by the movement of small bubbles.

Figure 4.35. Optical inspection shows the change in the density of voids (bubbles) in different locations of the crystal. Close to the edge of the boule, the concentration of the voids decreases. The small concentric bands in the bright parts of the crystal are so-called growth bands.

URL: https://www.sciencedirect.com/science/article/pii/B9780128001752000042

Game Theory

Guillermo Owen, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

I.A Extensive and Normal Forms

The extensive form of a game shows the logical sequence of moves, the information (or lack thereof) available to players as they move, and the payoff following each play of the game. As an example, consider the following (very rudimentary) form of poker. Each of two players antes $1. Player 2 is given a King; Player 1 is given a card from a deck consisting of one Ace and four Queens. (It is assumed that each card has probability 0.2 of being chosen.) At this point Player 1, seeing her card, has a choice of betting $2 or passing. If Player 1 passes, the game ends immediately. If player 1 bets, then 2 has the choice of folding or calling the bet. If 2 folds, then 1 wins the pot; otherwise there is a showdown for the pot, where Ace beats King and King beats Queen.

The extensive form of this game is shown in Fig. 1. The game starts at node A, with a random move (the shuffle). Player 1 must then bet or pass (nodes B and C). If Player 1 bets, it is then Player 2's turn to call or fold (nodes E and F). The remaining four nodes (D, G, H, and J) are terminal nodes, with payoffs of either 1 or 3 from one player to the other. The reader should take note of the shaded area joining E and F: it denotes the fact that, at that move, Player 2 is unsure of his position (i.e., he does not know whether Player 1 has an Ace or a Queen). Contrast this with the situation at nodes B and C, where Player 1 knows which card she has.

FIGURE 1.

It has been found that games are best analyzed by reducing them to their strategies. A strategy, as the word is used here, is a set of instructions telling a given player what to do in each conceivable situation. Thus, in Fig. 1, Player 1 has four strategies, since she has two possible choices in each of two possible situations. These strategies are BB (always bet), BP (bet on an Ace, pass on a Queen), PB (pass on an Ace, bet on a Queen), and PP (always pass). Player 2 has only two strategies, C (call) and F (fold), because he cannot distinguish between nodes E and F.

It may be noticed that, in a game with no chance moves, the several players' strategies will determine the outcome. In a game with chance moves, the strategies do not entirely determine the outcome. Nevertheless, an expected payoff can be calculated. The normal form of a game is a listing of all the players' strategies, together with the corresponding (expected) payoffs.

In the poker game of Fig. 1, suppose Player 1's strategy is BB while Player 2's strategy is F. In that case, Player 1 will always win the antes, so the payoff is +1 (Player 1 wins the dollar that Player 2 loses). If Player 1 plays BB while Player 2 chooses C, then Player 1 has a 0.8 probability of losing $3 and a 0.2 probability of winning $3. Thus Player 1's expected payoff is 0.8(−3) + 0.2(3) = −1.8. Other payoffs are calculated similarly, giving rise to the 4 × 2 matrix shown in Fig. 2: the four rows are Player 1's strategies, while the columns are Player 2's strategies. This matrix is the normal form of the game.
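
As a rough illustration of how the entries of such a normal form can be tabulated, the sketch below computes Player 1's expected payoff for every strategy pair in this rudimentary poker game. The excerpt does not spell out what happens to the antes when Player 1 passes; the code assumes a showdown for the antes in that case, an assumption that reproduces the two values given above (+1 for BB against F, −1.8 for BB against C).

```python
# Expected-payoff (normal form) calculation for the rudimentary poker game.
# Assumption not stated in the excerpt: if Player 1 passes, the antes are
# decided by a showdown (Ace beats King, King beats Queen).

P_ACE = 0.2          # probability that Player 1 is dealt the Ace
ANTE, BET = 1, 2     # each player antes $1; a bet adds $2

def payoff(card, p1_action, p2_action):
    """Payoff to Player 1 for one deal (card is 'A' or 'Q')."""
    p1_wins_showdown = (card == 'A')               # Ace beats King, King beats Queen
    if p1_action == 'P':                           # Player 1 passes
        return ANTE if p1_wins_showdown else -ANTE # assumed showdown for the antes
    if p2_action == 'F':                           # Player 1 bets, Player 2 folds
        return ANTE
    stake = ANTE + BET                             # bet is called: showdown for $3
    return stake if p1_wins_showdown else -stake

def expected(p1_strategy, p2_strategy):
    """p1_strategy = (action on Ace, action on Queen); p2_strategy = 'C' or 'F'."""
    on_ace, on_queen = p1_strategy
    return (P_ACE * payoff('A', on_ace, p2_strategy)
            + (1 - P_ACE) * payoff('Q', on_queen, p2_strategy))

for strat in ('BB', 'BP', 'PB', 'PP'):
    row = [expected(strat, s2) for s2 in ('C', 'F')]
    print(strat, row)   # e.g. BB [-1.8, 1.0], matching the values given in the text
```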

FIGURE 2. A normal form for the rudimentary poker game.

URL: https://www.sciencedirect.com/science/article/pii/B0122274105002726

Concurrent Computing

Rajkumar Buyya, ... S. Thamarai Selvi, in Mastering Cloud Computing, 2013

6.2.1 What is a thread?

A thread identifies a single control flow, which is a logical sequence of instructions, within a process. By logical sequence of instructions, we mean a sequence of instructions that have been designed to be executed one after the other. More commonly, a thread is a kind of yarn used for sewing, and the continuity expressed by the interlocked fibers of that yarn recalls the idea that the instructions of a thread express a logically continuous sequence of operations.

Operating systems that support multithreading identify threads as the minimal building blocks for expressing running code. This means that, whether or not they are explicitly used by developers, any sequence of instructions executed by the operating system runs within the context of a thread. As a consequence, each process contains at least one thread but, in many cases, is composed of several threads with variable lifetimes. Threads within the same process share the memory space and the execution context; apart from this, there is no substantial difference between threads belonging to different processes.

In a multitasking environment the operating system assigns different time slices to each process and interleaves their execution. The process of temporarily stopping the execution of one process, saving all the information in the registers (and in general the state of the CPU in order to restore it later), and replacing it with the information related to another process is known as a context switch. This operation is generally considered demanding, and the use of multithreading minimizes the latency imposed by context switches, thus allowing the execution of multiple tasks in a lighter fashion. The state representing the execution of a thread is minimal compared to the one describing a process. Therefore, switching between threads is a preferred practice over switching between processes. Obviously the use of multiple threads in place of multiple processes is justified if and only if the tasks implemented are logically related to each other and require sharing memory or other resources. If this is not the case, a better design is provided by separating them into different processes.

Figure 6.2 provides an overview of the relation between threads and processes and a simplified representation of the runtime execution of a multithreaded application. A running program is identified by a process, which contains at least one thread, also called the main thread. This thread is created implicitly by the compiler or the runtime environment executing the program; it is likely to last for the entire lifetime of the process and to be the origin of other threads, which in general have a shorter duration. Like the main thread, these threads can spawn further threads. Beyond that, there is no difference between the main thread and the other threads created during the process lifetime: each of them has its own local storage and its own sequence of instructions to execute, and they all share the memory space allocated to the process. The execution of the process is considered terminated when all its threads have completed.
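
As a minimal sketch of these ideas (the code is not from the book; Python's threading module is used purely for illustration), the following program shows a main thread spawning worker threads that all update memory shared by the enclosing process:

```python
# The process starts with an implicitly created main thread, which spawns workers.
import threading

shared_counter = 0                    # memory shared by every thread in the process
lock = threading.Lock()               # protects the shared state against lost updates

def worker(n_increments):
    """Each thread has its own local variables but updates the shared memory."""
    global shared_counter
    for _ in range(n_increments):
        with lock:
            shared_counter += 1

# The main thread spawns further threads; all of them share the process memory.
threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                          # the process terminates when all threads complete

print(shared_counter)                 # 40000: every thread updated the same counter
```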

Figure 6.2. The relationship between processes and threads.

URL: https://www.sciencedirect.com/science/article/pii/B9780124114548000061

Data, Information, and Knowledge

Eduardo Gelbstein, in Encyclopedia of Information Systems, 2003

V.A. Things That Cannot Be Expected from Computers

Computers as they are known around the year 2000 are able to process algorithms and logical sequences. They have, at best, crude sensory and artificial intelligence capabilities. As such they cannot understand context or situations.

Therefore, DDI are not necessarily “right” because they are delivered by a computer, and the user of these must be fully aware of the quality issues discussed in this article.

It is critically important that any DDI be relevant to the specific purpose for which they are collected. DDI collected for one purpose may be totally inappropriate for a different purpose.

URL: https://www.sciencedirect.com/science/article/pii/B0122272404000320

Accounting Measures

Michael Bromwich, in Encyclopedia of Social Measurement, 2005

The Balance Sheet

This shows the details of the firm's assets, such as land, property, and equipment, and its liabilities, in a logical sequence. An asset is an identifiable item that has been either acquired or constructed in-house in the past, is expected to yield future cash flows, and is in the control of the firm in the sense that the firm may decide whether and how to dispose of the asset or use it. The accounting approach to valuing items is bottom-up: each item is valued individually using a method appropriate to that item, and these values are aggregated to obtain valuations for classes of assets and liabilities and overall asset and liability values, which will generally incorporate different valuation bases for different classes of accounting items.

Intangible assets are expected to generate future cash flows but have no physical form, such as items of intellectual property, which include patents, licenses, trademarks, and purchased goodwill (the amount by which the price of an acquisition of a company exceeds the net value of all the other assets of the acquired company). Many intangibles obtained in an acquisition are allowed on the balance sheet valued at what is called their “fair value”: what they would exchange for in an arm's length transaction by willing parties. Such fair values are not generally allowed to be changed to reflect changes in market prices over time. The inclusion of such intangibles is in contrast to the unwillingness to extend this treatment to either internally generated goodwill, which reflects the firm's value-added operations, or internally created intangible assets, such as brands. Often in a modern, growing, high-tech firm, the values of such intangible assets may dominate physical assets. The possibilities for inclusion would be very large, including self-created brands, information technology systems, customer databases, and a trained work force. Generally, regulators have resisted attempts to treat such items as assets. Proponents of inclusion argue that this means that many of the sources of company value go unrecognized in the accounts and must be treated as costs in the year incurred. It is argued that this is one reason the accounting value of firms may fall far below their stock exchange values, especially for high-tech companies and companies in a growth phase. This is true even today; compare the accounting and stock market values of Microsoft. This makes empirical work with accounting numbers very difficult. Where intangibles are important in such exercises, they must be estimated separately.

The value of an asset is either based on its original cost or in most accounting regimes, including the United Kingdom but not the United States, based on a revalued amount taking into account changes in prices over time. Any gains on holding assets increase the value of the firm in its balance sheet and not its operating profit, though such gains or losses are shown elsewhere in the profit-and-loss statement. Firms do not have to revalue assets and many firms may use different valuation bases for different asset classes. Any revaluation must be based on independent expert authority and such independent assessments must be obtained regularly. By not revaluing assets in the face of favorable price changes, firms build up hidden reserves of profits realized when the asset is sold or when used in production. Accounting regulators have acted to stop asset overvaluation by making assets subject to a test that their values in the accounts do not exceed either their selling value (which may be very low where assets are specific to the firm) or their value when used by the firm (an estimate by management that may be highly subjective).

It is generally agreed for useful empirical work that assets must be restated at their current market prices minus adjustments for existing depreciation and the assets' technologies relative to that currently available on the market. One example of the use of current cost is in calculating what is called Tobin's Q, the ratio between the firm's stock market value and the current cost of its net assets. One use of Tobin's Q is to argue that firms with a Q ratio of greater than 1 are making, at least temporarily, superprofits.
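
As a small worked illustration of the ratio just described, the following sketch computes Tobin's Q; the figures are hypothetical and not taken from the text.

```python
# Tobin's Q = stock market value of the firm / current (replacement) cost of its net assets.
def tobins_q(stock_market_value, current_cost_of_net_assets):
    return stock_market_value / current_cost_of_net_assets

# Hypothetical numbers, in millions of any currency:
q = tobins_q(stock_market_value=150.0, current_cost_of_net_assets=100.0)
print(q)   # 1.5: a Q ratio above 1 suggests, at least temporarily, superprofits
```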

During the inflationary period of the mid-1970s to early 1980s, most leading accounting regimes allowed at least supplementary disclosure of generally reduced profits after allowing for current cost (adjusted replacement cost) depreciation and a balance sheet based on the increased current cost of assets. This movement fizzled out with the decline in inflation, the lack of recognition of current cost profits by tax authorities, managerial unhappiness with the generally lower profits disclosed by this type of accounting, and the need to maintain the current value of existing assets. More traditional accountants also argued that the revaluing of assets meant that profits would be reported (as an increase in the firm's value in the balance sheet) on revaluation and not when profits were realized either by use or by sale. Despite this, many experts believe that it is the intention of some accounting regulators (ASB and IASB) to move toward current cost accounting.

Regulators in most countries have acted, and are planning to act further, on off-balance-sheet items. Leased operating assets, such as airplanes, ships, and car fleets, do not have to appear on the balance sheet; only the annual cost of such leasing appears in the income statement. This understates the firm's capital employed, often by vast amounts, and thereby inflates returns on investment. This is planned to change shortly. Other ways of keeping assets off the balance sheet include setting up a supposedly independent entity that owns the assets and charges the “parent” firm for their use. Similar approaches are available to keep debt and other financial liabilities out of the financial statements.

The treatment of financial assets (those denominated in monetary terms, which include financial instruments) differs between accounting regimes and classes of financial assets. Long-term loans to and investments in other companies not held for sale are generally shown at their original cost. Most other financial assets and liabilities are usually valued at their market prices, with gains and losses usually going to the balance sheet until realized by sale. Almost all regulators are agreed that all financial assets and liabilities should be shown at their market prices or, where such prices do not exist, at estimated market prices, which is the case for complex financial instruments (including many derivatives). Any gains and losses would go to the profit-and-loss account. Using market prices means that there will be no need for what is called “hedge accounting.” This accounting means that investments to cover other risks in the firm are kept at their original cost until the event that gives rise to the risk occurs, thereby hiding any gains or losses on the hedges. The aim is to let accounting “tell it as it is,” under the view that market prices give the best unbiased view of what financial items are worth. There are problems here. The approach treats some financial items differently than other items, such as physical assets and intangibles; it records gains and losses before they are realized by sale or by use, whereas this is not allowed for other items; and it abandons to a degree the prudence and matching concepts and, perforce, often uses managerial estimates for complex items not traded on a good market, which conflicts with the wish of regulators to minimize managerial discretion.

URL: https://www.sciencedirect.com/science/article/pii/B0123693985004709

Greek, Indian and Arabic Logic

Charles Burnett, in Handbook of the History of Logic, 2004

Averroes’ intention in the Middle Commentaries is to paraphrase Aristotle's text (without directly quoting it), in a way that both brings out the logical sequence of Aristotle's arguments (hence his use of the ‘Porphyrian tree’ for the arrangement of the subject matter in these commentaries) and makes the subject matter intelligible to an Arabic audience.

URL: https://www.sciencedirect.com/science/article/pii/S1874585704800104

Handbook of Chemometrics and Qualimetrics: Part B

B.G.M. Vandeginste, ... J. Smeyers-Verbeke, in Data Handling in Science and Technology, 1998

34.3.1 Evolving factor analysis (EFA)

As previously discussed, data sets can in principle be factor-analyzed column-wise or row-wise. The fact that the rows follow a certain logical sequence may be exploited for finding the pure row factors. We illustrate this on the four-component chromatogram given in Fig. 34.25. The compounds are present in well-defined time windows, e.g., compound A in window t1 − t5, compound B in window t2 − t6, compound C in window t3 − t7 and compound D in window t4 − t8. This ordering provides additional information that can be exploited to estimate the pure spectra, from which the respective concentration profiles are then estimated. Sub-matrices are formed by adding rows to an initial top sub-matrix T0 (top down) or to a bottom sub-matrix B0 (bottom up). In our example the rank of these matrices increases from one to four, as is schematically shown in Fig. 34.25. By analysing these ranks as a function of the number of added rows, time windows are derived in which one, two, three, etc. significant PCs are present. Such an analysis of the rank as a function of the number of rows of X included in the principal components analysis is the principle of evolving principal components analysis (EPCA), which is the first step of evolving factor analysis (EFA) developed by Maeder et al. [17–19]. As explained in Chapter 17, an eigenvalue is associated with each principal component; it expresses the amount of variance described by the corresponding eigenvector. By determining the eigenvalues (variances) associated with each PC for each sub-matrix and plotting these eigenvalues (or the log of the eigenvalues) as a function of the number of rows ni included in the sub-matrix, a typical plot, dependent on the studied system, is obtained. Figure 34.26 shows the plot obtained for an HPLC-DAD data set. This figure can be interpreted as follows. For each sub-matrix a new significant eigenvector appears each time a new compound is introduced in the spectra, thus at t = t1, t2, t3 and t4. This is observed as an increase of the corresponding eigenvalue. From this plot it is not yet possible to derive the compound windows, because it indicates the appearance of a new compound but not its disappearance. Therefore, a second EPCA is carried out, but now in the reverse order, i.e. one starts with the initial matrix B0 and adds rows bottom up. A similar plot (see Fig. 34.27) is obtained, but in the reverse order: new factors appear at t = t8, t7, t6 and t5, which in the forward direction means that components disappear at t = t5, t6, t7 and t8. Because the widths of elution bands in HPLC in a narrow region of retention times are more or less equal, the compound which appears first in the spectra should also be the first one to disappear. Therefore, the compound windows are found by connecting the line indicating the first appearing compound in Fig. 34.26 with the last appearing compound in Fig. 34.27 (i.e., the first to disappear in the forward direction). Both figures can be combined in a single figure (see Fig. 34.28), from which the concentration windows can be reconstructed as indicated. Tauler and Casassas [20] applied this technique to reconstruct the concentration profiles in equilibrium studies. One could consider these bell-shaped profiles of the eigenvalues as a first rough estimate of the concentration profiles C in eq. (34.3). It is then possible to calculate the pure spectra S in an iterative way by first solving eq. (34.3) for S (by a least-squares method). Because C is only a first approximation, the spectra contain negative values, which can be set to zero. In the next step the corrected spectra are used to solve eq. (34.3) for C. These steps are repeated until S and C are stable. This is the principle of alternating regression, also called alternating least squares, introduced by Karjalainen [21]. This iterative resolution method has been successfully applied to many different types of analytical data, including chromatographic examples (see the recommended additional reading).
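
To make the mechanics of EPCA concrete, here is a minimal sketch on simulated data (not the chapter's data set) that grows a sub-matrix row by row, top down and bottom up, and records the leading eigenvalues at each step:

```python
# Forward and backward evolving PCA (EPCA) on a simulated HPLC-DAD-like matrix X
# (rows = spectra at successive times, columns = wavelengths).
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50)
# Four Gaussian concentration profiles appearing one after another in time
C = np.column_stack([np.exp(-0.5 * ((t - mu) / 4.0) ** 2) for mu in (12, 18, 24, 30)])
S = rng.random((4, 20))                                  # four pure spectra, 20 wavelengths
X = C @ S + 1e-3 * rng.standard_normal((50, 20))         # bilinear model plus a little noise

def evolving_eigenvalues(X, reverse=False, n_keep=4):
    """Leading eigenvalues of the growing sub-matrix (top down; bottom up if reverse)."""
    rows = X[::-1] if reverse else X
    evolution = []
    for i in range(1, rows.shape[0] + 1):
        sv = np.linalg.svd(rows[:i], compute_uv=False)   # singular values of T0 / B0
        ev = np.zeros(n_keep)
        ev[:min(n_keep, sv.size)] = (sv ** 2)[:n_keep]   # eigenvalues = squared singular values
        evolution.append(ev)
    return np.array(evolution)

forward = evolving_eigenvalues(X)                 # an eigenvalue rises when a compound appears
backward = evolving_eigenvalues(X, reverse=True)  # read backwards: where compounds disappear
print(forward[-1].round(2))                       # four significant eigenvalues at the end
```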

Fig. 34.25. Time windows in which compounds are present in a composite 4-component peak, with the ranks of the data matrices formed by adding rows to a top matrix T0 (top down) or to a bottom matrix B0 (bottom up).

Fig. 34.26. Eigenvalues when adding rows to T0 (forward evolving PCA).

Fig. 34.27. Eigenvalues when adding rows to B0 (backward evolving PCA).

Fig. 34.28. Reconstructed concentration profiles from the combination of Figs. 34.26 and 34.27.

An alternative and faster method estimates the pure spectra in a single step. The compound windows derived from an EPCA are used to calculate a rotation matrix R by which the PCs are transformed into the pure spectra: X = C S^T = T* R R^-1 V*^T. Consequently, C = T* R, where T* is the score matrix of X. Focusing on a particular component ci (the ith column of the matrix C), one can write ci = T* ri, where ri is the ith column of R. Because compound i is not present in the shaded areas of ci (see Fig. 34.29), the values of ci in these areas are zero. This allows us to calculate the rotation vector ri. Therefore, all zero rows of ci are combined into a new column vector ci0, and the corresponding rows of T* are combined into a new matrix Ti*0. As a result one obtains:

Fig. 34.29. Construction of a sub-matrix containing only the zero rows of ci.

(34.10)  ci0 = Ti*0 ri = 0

The procedure is schematically shown in Fig. 34.29. Equation (34.10) represents a homogeneous system of equations with the trivial solution ri = 0. Because component i is absent in the concentration vector, this component does not contribute to the matrix Ti*0. As a consequence, the rank of Ti*0 is one less than its number of rows, and a non-trivial solution can therefore be calculated. The value of one element of ri is arbitrarily chosen and the other elements are calculated by a simple regression [17]. Because the solution depends on the initially chosen value, the size (scale) of the true factors remains undetermined. By repeating this procedure for all columns ci (i = 1 to p), one obtains all columns of R, i.e., the entire rotation matrix.
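
A minimal sketch of this calculation is given below; the score matrix and the zero-window row indices are placeholders, since in practice they come from the EPCA step described above. Following the text, one element of ri is fixed arbitrarily and the remaining elements are obtained by least squares.

```python
# Solve the homogeneous system Ti*0 ri = 0 of eq. (34.10) for one rotation vector ri.
import numpy as np

def rotation_vector(T_scores, zero_rows):
    """T_scores: n x p score matrix T*; zero_rows: indices where compound i is absent."""
    T0 = T_scores[zero_rows]               # sub-matrix Ti*0 built from the zero window
    A, b = T0[:, :-1], -T0[:, -1]          # fix the last element of ri to 1 (arbitrary scale)
    r_free, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.append(r_free, 1.0)          # the scale of ri remains undetermined

# Hypothetical usage, with T_scores from a PCA of X and the zero window from EPCA:
# r1 = rotation_vector(T_scores, zero_rows=list(range(0, 5)) + list(range(40, 50)))
# c1 = T_scores @ r1                       # concentration profile of compound 1, up to scale
```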

URL: https://www.sciencedirect.com/science/article/pii/S0922348798800440

Knowledge flows and graphic knowledge representations

Giorgio Olimpo, in Technology and Knowledge Flow, 2011

Vector of knowledge

Communication should be easy, efficient and unambiguous. Natural language alone is not an efficient means for communicating structured knowledge because of its intrinsic nature, which is serial (words and concepts must flow as a logical and temporal sequence) and evocative (the same sentence may evoke different meanings in different human receivers). Serial communication does not directly provide holistic perspectives (which are a key factor for knowledge flows); rather, it leaves the task of building those perspectives to the receiver. Evocative communication necessarily implies a considerable degree of ambiguity because different receivers may have different cognitive reactions to the same message. Without excluding natural language, a wise use of knowledge representations may significantly contribute to overcoming these limitations. Representations are built in terms of artificial languages which may be able to provide a global picture of the thing being represented directly (this is especially true for graphic languages). Besides, in most cases, the ontological components of representation languages have a formal or quasi-formal definition, which favours a more focused cognitive reaction in the target receiver.

URL: https://www.sciencedirect.com/science/article/pii/B9781843346463500058

Data parallelism

Maurice Herlihy, ... Michael Spear, in The Art of Multiprocessor Programming (Second Edition), 2021

The second approach is based on stream programming, a programming pattern supported by a number of languages (see the chapter notes). We use the interface provided by Java 8. A stream is just a logical sequence of data items. (We say “logical” because these items may not all exist at the same time.) Programmers can create new streams by applying operations to elements of an existing stream, sequentially or in parallel. For example, we can filter a stream to select only those elements that satisfy a predicate, map a function onto a stream to transform stream elements from one type to another, or reduce a stream to a scalar value, for example by summing the elements of a stream or taking their average.
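
The chapter works with the Java 8 stream interface; purely as an analogue (an assumption, not the chapter's own code), the same filter/map/reduce pattern can be sketched with lazily evaluated Python generator pipelines:

```python
# filter -> map -> reduce over a logical sequence of items produced lazily, one at a time.
from functools import reduce

numbers = range(1, 11)                               # the source "stream" of data items

evens = (n for n in numbers if n % 2 == 0)           # filter: keep elements satisfying a predicate
squares = (n * n for n in evens)                     # map: transform each element
total = reduce(lambda acc, x: acc + x, squares, 0)   # reduce: collapse the stream to a scalar

print(total)    # 220 = 4 + 16 + 36 + 64 + 100
```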

URL: https://www.sciencedirect.com/science/article/pii/B9780124159501000276

Human-Computer Interaction for Medical Visualization

Bernhard Preim, Charl Botha, in Visual Computing for Medicine (Second Edition), 2014

5.2.3 Representations of Task Analysis

Task analysis yields a wealth of data that needs to be filtered, structured, prioritized and consolidated before concise results can be extracted. Audio recordings from interviews or “think aloud” sessions, hand-written notes, and schematic drawings of workplaces or tasks are typical examples of the collected data. Filtering, of course, is a highly sensitive task. It is essential to strongly reduce the amount of data before further analysis. However, wrong decisions in the filtering process necessarily lead to incomplete task analysis results. Other problems, such as an inappropriate structure or prioritization, are likely to be detected later.

There are different representations that are frequently used to convey task analysis results:

hierarchical task analysis (HTA) where a task is (at multiple levels) decomposed into subtasks,

workflows that capture the flow of data and information, and

scenarios that are semi-informal representations providing also background information on the importance of specific parts of a solution.

In the following, we focus on workflows and scenarios, since these representations have been used and refined for medical visualization applications frequently.

5.2.3.1 Workflow Analysis

Workflow analysis and redesign is a core activity in business informatics where business processes should be designed, evaluated and optimized.

Definition 5.1

Workflows represent a process as a graph representation which contains actions or events (nodes in the graph) and their logical sequence (edges in the graph). Workflows may contain variants and may emphasize typical sequences of actions.
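
As a minimal sketch of this definition (not taken from the chapter; the surgical steps and transition frequencies are illustrative placeholders), a workflow can be held as a set of action nodes and weighted edges encoding their logical sequence and typical variants:

```python
# A workflow as a graph: nodes are actions/events, edges give their logical sequence.
workflow_nodes = ["anamnesis", "diagnosis", "access planning", "resection", "outcome control"]

workflow_edges = [                               # (from, to, observed transition frequency)
    ("anamnesis", "diagnosis", 1.0),
    ("diagnosis", "access planning", 0.8),
    ("diagnosis", "outcome control", 0.2),       # variant: no intervention required
    ("access planning", "resection", 1.0),
    ("resection", "outcome control", 1.0),
]

def successors(node, edges):
    """Possible next actions together with how typical each transition is."""
    return [(dst, weight) for src, dst, weight in edges if src == node]

print(successors("diagnosis", workflow_edges))
# [('access planning', 0.8), ('outcome control', 0.2)]
```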

Workflows in Medicine The design of medical visualization may borrow from these experiences, notations and tools to identify such workflows and thus to characterize diagnosis, treatment planning, interventional procedures and outcome control. As a first step for understanding a workflow in medicine, it is highly recommended to look for official guidelines of the medical societies. Such guidelines describe in which stages of a disease a particular treatment is justified, how this treatment should be accomplished (including variants) and how (often) treatment success should be verified, e.g., with control examinations at certain intervals. In medical publications, patient workflow, diagnostic workflow, administrative workflow, anesthetic workflow and surgical workflow, among others, are discussed [Neumuth, 2011]. Image data and advanced visualization are relevant for only some of these workflows. In the following, we focus on surgical workflows, since there is substantial and well-documented experience with the acquisition and exploitation of such workflows.

Surgical Workflow Analysis Due to individual patient conditions and the different capabilities and preferences of surgeons, the variability of workflows is considerable. Different surgical schools add another level of variability that is significantly higher than in standardized industrial production processes.

Not every minor variation needs to be explicitly represented – often elementary workflows may be generalized. Workflows may also encode how often certain procedures occur, and how much time they take—information that is crucial in deciding which processes may be improved by computer support [Neumuth et al., 2006].

Top-Down and Bottom-Up Workflow Analysis Workflows may be derived in a top-down manner based on interviews with surgeons. These workflows have a rather low resolution but capture the experience of surgeons. On the other hand, workflows may be derived by precise measurements in the Operating Room (OR) where the use of tools, OR equipment and information is recorded (partly manually, see Fig. 5.3, partly automatically by means of various sensors). These bottom-up workflows are more detailed and contain quantitative information [Neumuth, 2011]. However, the effort to generate such workflows is considerable and includes a careful interpretation of measured and recorded events. Combinations of top-down and bottom-up approaches are possible, e.g., after a high-level workflow is determined in a top down manner, the observation of several instances of surgery serves to verify and refine that workflow. In general, workflows describe processes at various levels, thus allowing an analysis at different levels of granularity. Figure 5.4 illustrates these levels graphically for a cardio-vascular surgery. Specialized editors support the creation of workflows.

Figure 5.3. A workflow for a surgical intervention in cardiology resulting from careful observations (left with a tablet PC and dedicated software) in the operating room.

(Based on a courtesy of Thomas Neumuth, ICCAS Leipzig)

Figure 5.4. A high-level workflow and a selected refinement of one step is presented. This workflow represents a coronary arterial bypass surgery—an essential intervention in cardiac surgery.

(Based on a courtesy of Thomas Neumuth, ICCAS Leipzig)

Discussion The formal character of this representation is a benefit that clearly supports the software development process. However, since this notation is not familiar to medical doctors, workflows are not always useful for discussions with them. As a remedy, the language used in workflows to characterize different states and transitions should be carefully chosen to reflect the proper use of medical terminology. Neumuth [2011] emphasizes the need for a common language between workflow analysts and doctors.

Also, at different sites or even among different doctors at one site, there might be huge differences in their specific workflows (unlike in manufacturing and administrative procedures, medical treatment is and must be more individualized with respect to the patient and the medical doctor). Workflow diagrams can hardly represent that variability, but are often restricted to a somehow averaged instance. Finally, a workflow diagram abstracts from important aspects, such as the motivation for and relevance of some steps. Thus, the rich picture which results from task analysis must be strongly simplified to yield a workflow description.

With respect to medical visualization, the use of microscopes, navigation data, and display facilities such as ultrasound is essential to derive requirements, in particular for intraoperative visualizations.

For more information and successful examples of workflow acquisition in surgery, readers are referred to [Blum et al., 2010, Jannin et al., 2003, Neumuth et al., 2009, Neumuth et al., 2011, Padoy et al., 2010]. Padoy et al. [2010] present neurosurgical workflows and carefully discuss how a standardized terminology is incorporated. Neumuth et al. [2011] discuss how a “mean” workflow may be derived, using 102 cataract interventions from eye surgery as an example. Workflow analysis with the goal of identifying the current stage of surgery, predicting the next steps and supporting them directly, e.g., by adjusting the endoscope's field of view or the OR lighting, is a promising research area to better support surgeons. At least some surgical interventions, e.g., cataract surgery, are sufficiently standardized for this kind of support.

In fact, they (and other authors) suggest combining workflows with ontologies that also represent relations, such as “a lung has lobes.” An important recent refinement of workflow notation is to derive a few representative workflows from many observations and to add percentages to the different steps and transitions between states, explicitly representing how likely certain workflows are.

5.2.3.2 Scenario-Based Design

Scenarios are now widely used in HCI, in particular to characterize and envision radically new software systems [Rosson and Carroll, 2003].

Definition 5.2

Scenarios are natural language descriptions of a process that include statements about which technology or feature is used for which purpose. The stakeholders are explicitly mentioned. They contain different perspectives as well as motivations from users.

Scenarios are more open to interpretation, which may be considered a drawback. However, they are clearly useful as a basis to discuss with medical doctors. Although scenarios are natural-language descriptions, they follow a certain structure and contain certain elements at a minimum. Thus, they may be characterized as semi-formal representations.

In three projects at the University of Magdeburg, scenario descriptions have been used and discussed within the development team and with medical doctors resulting in a large corpus of descriptions, annotations and refined descriptions [Cordes et al., 2009]. Figure 5.5 shows different types of scenarios and their relations as they have been used for liver surgery training, neck surgery planning and minimally-invasive spine surgery training. This scheme is a refined version of a process originally described by Benyon et al. [2005]. According to this scheme, initially a set of user stories is created to describe essential processes from a user’s perspective in natural language. User stories include explicit statements of expectations and preferences. After discussion and refinement, the user stories are refined to conceptual scenarios that abstract from expectations and preferences, and may summarize user stories. Concrete scenarios are derived to precisely describe how the interaction should be performed and how the system responds. Thus, a conceptual scenario might include a statement such as “vascular structures in the vicinity of the tumor are emphasized.” A concrete scenario must describe the specific emphasis technique used. Finally, use cases are described in a more formal manner. The use cases provide dense information without background information or motivation. Use cases are a part of UML (Unified Modeling Language) and play an essential role in modern software engineering. Thus, by including use cases, a link between user interface and (classical) software engineering is provided.

Figure 5.5. To envision a system, high-level user stories are refined stepwise by providing detail on how a function should be performed and by considering constraints from the context of the intended system use. The links between the documents need to be managed.

(Courtesy of Jeanette Mönch, University of Magdeburg)

The following is a short portion from a user story and the derived scenarios for a SpineSurgeryTrainer (see Fig. 5.6 and [Cordes et al., 2008]):

Figure 5.6. A screenshot from a training system for minimally-invasive spine surgery where needle placement should be trained (Courtesy of Jeanette Mönch, University of Magdeburg. See also [2008]).

User Story: The doctor in training has to place an injection in the area of the cervical spine for the first time. He is insecure and wants to train this procedure to test his skills and to do the real injection with self-confidence. But there is no expert and no cadaver available at the moment. Since he wants to start the training directly, he decides to train the injection virtually …

Conceptual Scenario: He starts with the survey of the patient data and anamnesis. After that, he decides for an injection as therapy and starts the training of the virtual placement of the needle [Concrete Scenario 1] based on the MRI data and the 3D model of the patient’s anatomy …

Concrete Scenario 1: (Details of injection planning): With the mouse (left mouse click) he defines one marker for the penetration point and one for the target point of the needle in the 2D data. The needle takes up its position. In an animation the user can view the injection process of the needle to his defined position …

Experiences with the use of Scenarios In total, six scenarios related to cases with different levels of difficulty and different viable treatment options have been explored in this example. The discussion of such scenarios with medical doctors led to many ideas for the exploration of the data, in particular when the decision between two alternative therapies depends on subtle details of the patient anatomy. As a consequence, it was discussed how such transitions in surgical decisions should be reflected in training systems. We already mentioned the importance of decisions. Here, we learned specific examples of difficult decisions and how they are taken. Slight variations in the angle between spinal disks determine whether an access from the back is possible or whether a more complex intervention from the frontal side is required to access the pathology.

In another project it turned out in the discussion of scenarios for surgical planning that the envisioned tool is also relevant for patient consulting, where surgical options are explained to the patient and to family members (see Fig. 5.7). For this purpose, a large display device is useful and the set of available features may be strongly reduced.

Figure 5.7. The use case of patient consult was identified in a discussion of user stories for a virtual endoscopy system (planning of an endoscopic intervention in the nasal region). A large screen with a 40-inch diagonal (for the patient), connected to a conventional notebook on which the doctor modifies the view, is an appropriate configuration.

(Courtesy of Gero Strauß, University Hospital Leipzig)

Combining Scenarios and Visual Components A drawback of a pure scenario-based design is that it is restricted to textual components. Scenario descriptions may be enriched with sketches, screenshots, and digital photos from important artifacts. Implants, phantom data of an anatomical region, surgical instruments or relevant objects from the desk of a medical doctor may be among these artifacts. In particular for envisioning future usage, visual components, such as sketches, screenshots, video sequences, storyboards or even cartoons are essential. To further strengthen the imagination of medical doctors and to support the reflection on the user stories, the strongly visual components of diagnostics and treatment planning systems need to be incorporated into early design stages. The set of previously described scenarios, for example, was linked with Figure 5.6.

Guidelines for Scenario Authors Another essential lesson is that scenario authors need support to create meaningful and coherent descriptions. Thus, the experiences with scenarios in a certain type of application should be analyzed with the goal of creating guidelines for scenario authors. These guidelines should provide guidance on the content, structure, and use of the different types of scenarios. Hints for the use of visual components should also be provided.

Combination of Scenarios and Workflows Workflows and scenarios provide useful and complementary information to guide the development process. While scenarios better support the discussion between user researchers and target users, they do not inform the actual developers in a concise manner. For the developers, a validated workflow description is a valuable support, in particular for implementing wizard-like systems which guide the user in a step-by-step manner. The systems developed at the University of Magdeburg were also based on workflow descriptions at different granularities.

Surgical planning, for example, at the highest level often follows the workflow:

anamnesis and diagnosis,

assessment of the general operability (Can the patient tolerate anesthesia? Will the patient recover from a major surgery? …),

resectability (Is the pathology accessible, and can it be removed without damaging vital structures?),

access planning,

in-depth planning including vascular reconstructions.

Requirements The final result of the analysis stage is usually a set of requirements, which should be

concise,

precise enough to verify whether they are fulfilled, and

consistent with each other.

Requirements should be associated with a priority which reflects whether a certain aspect is a “must have” or a rather optional aspect, which might delight users but will not be sorely missed if unavailable. Obviously, requirements need to be discussed, validated and updated [Pohl and Rupp, 2009]. To make matters worse, in any non-trivial situation, requirements change over time based on first experiences with prototypes or new technologies. It is reasonable to allow a certain flexibility and thus to integrate a history mechanism into the requirements document which makes it possible to keep track of any changes. For product development subject to legal admission procedures, such a mechanism is even mandatory.

Managing Task Analysis Results Task analysis results in many different kinds of data, such as workflows, scenarios and requirements. These data should be carefully managed in a database or data warehouse so that they can be used throughout longer projects and also across a set of related projects. The substantial effort of a task analysis is only justified if the results are used intensively and over a longer period of time. As a rule of thumb, task analysis results remain valid for a period of about 5 years.

URL: https://www.sciencedirect.com/science/article/pii/B9780124158733000055

What is the ability to determine the logical sequence in a problem and then solve it accordingly?

Inductive reasoning is the ability to identify a logical sequence in a problem and then solve the problem.

What is a logical sequence of steps prepared for solving a problem?

An algorithm is a series of well-defined steps which gives a procedure for solving a type of problem.
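
As a small illustration of this definition, Euclid's algorithm for the greatest common divisor is a classic example of a well-defined sequence of steps that solves one type of problem; the sketch below is a generic textbook version.

```python
# Euclid's algorithm: a fixed logical sequence of steps for finding the GCD of two integers.
def gcd(a: int, b: int) -> int:
    """Repeat one well-defined step until the problem is solved."""
    while b != 0:
        a, b = b, a % b      # step: replace (a, b) by (b, a mod b)
    return a

print(gcd(48, 36))           # 12
```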

What is the logical sequence?

A logical sequence is a set of numbers, words, objects, etc., following one another with some sort of relation between consecutive terms. Sometimes it is also called a progression.

What is logical reasoning and problem solving?

What is Logical Reasoning and Problem Solving? Logical Reasoning and Problem Solving seeks to test students' ability to process information and data, reach logical conclusions and solve problems.