Jean Walrand, Pravin Varaiya, in High-Performance Communication Networks (Second Edition), 2000

We briefly describe the functional
components, the atomic actions on network resources, in five types of network operations. The functional components for control of processing are of two types: to provide instructions when the SSP asks the SCP to take control of the call processing and to effect the release of control when the SCP returns the control to the SSP after servicing the request. A connection request involves the following functional components: creating a leg between an SSP and another network element, joining a leg to an ongoing call, splitting a leg from an ongoing call, and freeing a leg to release the resource. Two
types of functional components are invoked in user interactions: sending information such as a prerecorded announcement to a call participant and receiving specific information such as dialed digits from a call participant. Network resource status requests are used by the SLP in processing some call control.
Monitoring is a functional component that instructs the SSP to provide notification of a particular event on a specified leg, such as on-hook, flash-hook, or off-hook. Network information revision requests enable the SLP to change the data stored in the SSP tables. In summary, INA is the culmination of a long development in which network element functions or operations are separated from the control of those functions. This separation permits the creation of new services as programmable sequences of functional components. Sophisticated customers can program these sequences by themselves. A very important example is 800-number service. Companies that provide direct customer service over an 800 number, such as credit card and direct-order companies, can route customer calls to
different parts of the country or to different operators depending on the time of day, the subscriber's location, and other information provided by the subscriber via the telephone keypad.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780080508030500101

Formulating the Functional Architecture
Richard F. Schmidt, in Software Engineering, 2013

10.2.1 Functional component

A functional component represents a complex task the software product must perform. A functional component is activated when control is transferred to it for execution. Every function transforms one or more data items, in the form of inputs or global or local variables, into an output data item or processed variable. Functional complexity is apparent when any of the following conditions exists:
●A function involves several data transformation actions, and at least one action has no clear, uncomplicated solution.
●A function involves distinguishable conditional responses.
●A function involves multiple interfaces with other functions or with external systems, users, or other software applications, such as databases.
Functional complexity compels the solution to be further decomposed into less complex functional components. Decomposition requires that a functional component be broken down into two or more subfunctions. Designating a function as a component indicates that a lower level of functional detail is needed to express unambiguously how the data processing or transformation is performed. Several layers of decomposition may be necessary to establish a noncomplex solution.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780124077683000100

Genesis of SDN
Paul Göransson, ... Timothy Culver, in Software Defined Networks (Second Edition), 2017

3.2.5 ForCES: Separation of Forwarding and Control Planes

The Forwarding and Control Element Separation (ForCES) [19] work in the IETF began around 2003.
ForCES was one of the original proposals recommending the decoupling of the forwarding and control planes, as well as a standard interface for communication between them. The general idea of ForCES was to provide simple hardware-based forwarding entities at the foundation of a network device, with software-based control elements above. These simple hardware forwarders were constructed using cell-switching or tag-switching technology. The software-based control had responsibility for broader tasks, often involving coordination between multiple network devices (e.g., Border Gateway Protocol routing updates). The functional components of ForCES are as follows:
•Forwarding Element: The Forwarding Element (FE) would typically be implemented in hardware and located in the network. The FE is responsible for enforcing the forwarding and filtering rules that it receives from the controller.
•Control Element: The Control Element (CE) is concerned with coordination between the individual devices in the network and with communicating forwarding and routing information to the FEs below.
•Network Element: The Network Element (NE) is the actual network device, which consists of one or more FEs and one or more CEs.
•ForCES Protocol: The ForCES protocol is used to communicate information back and forth between FEs and CEs.
ForCES proposes the separation of the forwarding plane from the control plane, and it suggests two different embodiments of this architecture. In one embodiment, both the forwarding and control elements are located within the networking device. The other embodiment speculates that it would be possible to move the control element(s) off the device entirely and locate them on a different system. Although the suggestion of a separate controller thus exists in ForCES, the emphasis is on communication between the CE and FE over a switch backplane, as shown in Fig. 3.4.

Fig. 3.4. ForCES design.
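The CE/FE split can be sketched in code. The following toy model is only an illustration of the idea, not the ForCES protocol itself; all class and method names are invented here. A control element pushes forwarding rules down to a forwarding element, which then enforces them by longest-prefix match:

```python
from dataclasses import dataclass, field

@dataclass
class ForwardingRule:
    # Match on a destination-address prefix; forward out of the named port.
    prefix: str
    out_port: int

@dataclass
class ForwardingElement:
    """FE: enforces the forwarding rules pushed down by a CE."""
    rules: list = field(default_factory=list)

    def install_rule(self, rule: ForwardingRule) -> None:
        self.rules.append(rule)

    def forward(self, dest: str) -> int:
        # Longest-prefix match over installed rules; -1 means drop.
        best = max((r for r in self.rules if dest.startswith(r.prefix)),
                   key=lambda r: len(r.prefix), default=None)
        return best.out_port if best else -1

class ControlElement:
    """CE: computes routes and distributes them to one or more FEs."""
    def __init__(self, fes):
        self.fes = fes

    def push_rule(self, rule: ForwardingRule) -> None:
        for fe in self.fes:
            fe.install_rule(rule)

fe = ForwardingElement()
ce = ControlElement([fe])
ce.push_rule(ForwardingRule("10.0.", 1))
ce.push_rule(ForwardingRule("10.0.1.", 2))
print(fe.forward("10.0.1.7"))  # more specific prefix wins
```

In the first ForCES embodiment, `ControlElement` and `ForwardingElement` would live in the same device; in the second, the CE would run on a separate system and `push_rule` would travel over the ForCES protocol.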
Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B978012804555800003X

Cognitive Radio Architecture
Joseph Mitola III, in Cognitive Radio Technology (Second Edition), 2009

14.2.2 Design Rules Include Functional Component Interfaces

The six functional components (see Table 14.2(a) and (b)) imply associated functional interfaces. In architecture, design rules may constrain the quantities and types of components as well as the interfaces among those components. This section addresses the interfaces among the functional components.

Table 14.2(a). CR N-Squared Diagram
Note: This matrix characterizes internal interfaces between functional processes. Interface notes 1–36 are explained in Table 14.2(b). P = primary; A = afferent; E = efferent; C = control; M = multimedia; D = data; S = secondary; others not designated P or S are ancillary. a. Information services API. b. CAPI.

Table 14.2(b). Explanations of Interface Notes for Functional Processes Shown in Table 14.2(a)
The CR N-squared diagram of Table 14.2(a) characterizes CR interfaces. These constitute an initial set of CR APIs, augmenting the established SDR APIs. This enables basic CRs to accommodate the dynamic spectrum behavior of the Defense Advanced Research Projects Agency (DARPA) NeXt-Generation (XG) and Wireless Network after Next (WNaN) radio communication programs. In other ways, these APIs supersede the existing SDR APIs. In particular, the SDR user interface (GUI) becomes the user sensory and effector API. User sensory APIs include acoustics, voice, and video, and the effector APIs include speech synthesis to give the CR <Self/> its own voice. In addition, wireless applications are growing rapidly. Voice and short-message service provide an ability to exchange images and video clips with semantic tags among wireless users. The distinctions between cell phone, personal digital assistant (PDA), and game box continue to disappear. These interface changes enable the CR to sense the situation, to interact with the user, and to access radio networks on behalf of the user according to its situational assessment. The preceding information flows, aggregated into an initial set of CR APIs, define an information services API (ISAPI) by which an information service accesses the other five components (interfaces 13–18, 21, 27, and 33 in Table 14.2(a)). They would also define a CAPI by which the cognition system obtains status and exerts control over the rest of the system (interfaces 5, 11, 17, 23, 25–30, and 35 in Table 14.2(a)). Although the constituent interfaces of these APIs are suggested in this table, it would be premature to define these APIs without first developing detailed information flows and interdependencies. We will define and analyze these APIs in this chapter.
It would also be premature to develop such APIs without a clear idea of the kinds of RF and user domain knowledge and performance expected of the CR architecture over time. These aspects are developed in the balance of this chapter, enabling one to draw some conclusions about these APIs in its final part. A fully defined set of interfaces and APIs would be circumscribed in RXML.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B978012374535400014X

Protecting Virtual Infrastructure
Edward G. Amoroso, in Computer and Information Security Handbook (Third Edition), 2017

3 Hypervisor Security

The most important functional component in any virtual infrastructure is the hypervisor. Ensuring that the underlying hypervisor is sufficiently secure is an important first step toward overall virtualization security. The US National Institute of Standards and Technology (NIST) published a guide for securing the hypervisor that serves as a useful reference on 22 best practices in this area [1]. Some of the more important techniques recommended in the NIST guide include:
•Hypervisor configuration: As with traditional OSs, hypervisors can be configured properly or improperly. Example hypervisor misconfiguration problems include rogue virtual machines gaining too much access to underlying hardware resources.
•Hypervisor patch and vulnerability management: As with any software, hypervisors are likely to become subject to required patches and vulnerability updates, so hypervisor administrators must put commensurate processes in place.
•Privileged operation execution management: Because hypervisors sit between guest OSs and the underlying hardware, privileged operations must be managed carefully during execution.
Enterprise IT and security staff should not be surprised by these types of recommendations for securing hypervisors, because they closely match the types of OS security recommendations that have been around for years.
As a general rule, if a heuristic, tool, or procedure is in place to protect an OS, something comparable has probably been proposed to protect the hypervisor (see checklist: "An Agenda for Action for Implementing Security Recommendations for the Hypervisor") [1].

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780128038437000673

Secure Development Life Cycle
Zhendong Ma, ... Paul Murdock, in Smart Grid Security, 2015

8.1 Introduction

Since the Smart Grid is basically defined as an addition to the existing power grid infrastructure with an extended information and communication technology layer, there will be virtually no Smart Grid component that does not include software. In order to gain an overview of where security lifecycle assessments can be most complex, it makes sense to distinguish components according to their functionality rather than their exact position in the overall technical system. From this perspective, the Smart Grid is an example of a classical automation system with a field layer (sensors and actuators), an automation layer (communication systems and controllers), and a management layer (centralised systems). One of the ongoing discussions in this context is how much computation will actually take place in distributed form in substations, customer gateways, etc., and how much of it will be centrally located in the different data centres of different stakeholders (distribution grid operator, aggregator, electric mobility provider, etc.). In different countries, there might be different answers to this question. The following discussion takes a Central European view. A comprehensive overview of functional components in a Smart Grid can be found in (SGCG, 2012). In order to gain insight into the specific tasks accomplished by software-implemented functionality, only three prominent examples shall be discussed in the following, rather than covering all potential components in a Smart Grid.
These examples are selected from the power distribution domain and focus on components in which the introduction of Smart Grid concepts leads to significant changes and extensions relative to the pre-Smart Grid era.

Example 1: The Customer Gateway

The question of how to interface energy end users with the management and coordination systems in a Smart Grid is one of the central discussion points in the Smart Grid community, and it can be said that this question is not yet fully solved. Certain appliances, such as heating systems, photovoltaic inverters, or charging stations for electric cars, will have to be managed in the future and require IT interfaces for this purpose. These can either be realised on a per-appliance basis, with individual solutions for heat pumps, inverters, etc. Alternatively, a central Smart Grid interface for all appliances at a customer's site can be instantiated that handles all Smart Grid-related coordination. From a security perspective, the latter solution may be preferred because a single interface offers a smaller attack surface, which can be secured in a much simpler manner than a large variety of different interfaces with slightly different purposes. The German Bundesamt für Sicherheit in der Informationstechnik (BSI) has coordinated a large exercise to define the security measures that are required for such a central interface (BSI, 2012). This interface is a software-heavy component, comparable to a firewall, managing security and privacy for the management of grid-relevant generators and loads on the customer's site. This can be a private household, but also a larger site of a small enterprise or even an industrial installation. The main tasks of this interface are billing, generation shedding in case of grid congestion, and management of load and generation flexibility in combination with aggregators or virtual power plants.
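The single-gateway design favoured above can be illustrated with a minimal sketch. All names here are hypothetical and not drawn from the BSI specification; a real gateway would add authentication, logging, and privacy controls at this single choke point:

```python
class Appliance:
    """Base class for a manageable load or generator at the customer site."""
    def handle(self, command: str, value: float) -> str:
        raise NotImplementedError

class HeatPump(Appliance):
    def handle(self, command, value):
        return f"heat pump: {command} -> {value} kW"

class PVInverter(Appliance):
    def handle(self, command, value):
        return f"inverter: {command} -> {value} kW"

class CustomerGateway:
    """Single Smart Grid interface: all appliance coordination passes
    through one point, which can then be secured like a firewall."""
    def __init__(self):
        self._appliances = {}

    def register(self, name: str, appliance: Appliance) -> None:
        self._appliances[name] = appliance

    def dispatch(self, name: str, command: str, value: float) -> str:
        # A production gateway would authenticate and log every command here.
        if name not in self._appliances:
            raise KeyError(f"unknown appliance: {name}")
        return self._appliances[name].handle(command, value)

gw = CustomerGateway()
gw.register("hp1", HeatPump())
gw.register("pv1", PVInverter())
print(gw.dispatch("pv1", "limit_output", 3.5))
```

The security argument in the text maps directly onto `dispatch`: because every command crosses this one method, there is exactly one attack surface to harden, rather than one per appliance type.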
Example 2: Secondary Substation Automation

Another point in the system where changes are taking place is the secondary substation, the last transformer station down the line, feeding the low voltage network to which most end customers are connected. It can be seen as the counterpart on the power grid side to the customer interface discussed above. While these secondary substations were in the past mostly passive installations with a transformer, fuses, and hand-operated breakers and re-connectors, IT equipment is finding its way into them. This is primarily motivated by the ongoing smart metering rollout. The majority of European smart meter installations use power line communication for the last mile from the secondary substation to the customer, which means that a power line communication endpoint (called the data concentrator) has to be installed in most secondary substations. For collecting metering data from these concentrators, technologies like direct RF links, GPRS, or fibre optics in urban areas are used. This results in a large number of secondary substations going online. Many grid operators have taken this opportunity to add substation automation equipment to secondary substations for monitoring and remote control purposes, since this comes with marginal additional cost when combined with the data concentrator installation. Functionalities realised here include meter data collection, monitoring and local grid control (e.g. for optimal tap position of the transformer), access control, and others. This involves a number of communication stacks, such as DLMS-COSEM for metering, IEC 60870-5-104 or even IEC 61850 for automation, and Modbus for local sensors and actuators. See Chapter 5 for a description of these communication protocols. Again, these functions are mostly implemented in software.
Example 3: Distribution Management Systems

Distribution Management Systems (essentially SCADA systems for distribution power grids) are not new and existed before the advent of the Smart Grid concept. Distribution grids were originally designed for supplying loads rather than carrying away power generated from distributed generators. With the significant rise of generation capacities in power distribution systems, the functionality required from the management systems has changed. Today's Distribution Management Systems typically contain online details of the medium voltage level. A central functionality is to depict the system status in the form of typically large visualisations (screens, projections) and to allow operators to interact with all the active components in the system (switches, on-load tap-changer transformers, compensation circuits, etc.). The low voltage level is usually not included, because there is no remote monitoring and control system in place, and the level of detail required to depict these systems could not be managed by the small number of operators in charge of system operation. The rising number of generators in the low voltage network (mainly photovoltaics) has resulted in two different trends in how this situation is handled in distribution management. The first is a straightforward extension of existing Distribution SCADA systems to parts of the low voltage systems, usually combined with an advanced alarming solution that draws the operators' attention to the low voltage level only in case of special events. The second trend is to develop low voltage management systems out of the geographical information systems (GIS) that most grid operators maintain for their complete distribution systems. The state of manually operated components such as switches can then be reported, e.g. by field operators using an online version of the software on a handheld device.
8.1.1 The Development of Software for the Smart Grid

Current trends show that a large amount of resources is and will continue to be invested in software development for the Smart Grid. According to Groom Energy (Energy, 2013), there are over 300 companies active in the area of Smart Grid software solutions, offering a broad range of products for energy managers and operators to monitor and optimise energy consumption based on business rules, intelligence, and user behaviour. The Smart Grid software vendor landscape includes companies for building management systems, utility bill payment, carbon management, energy management, demand response, industrial control, and sub-meters. The companies developing Smart Grid software include engineering companies, computer and enterprise software companies, network and communication equipment companies, and companies specialised in embedded systems. Depending on the organization, different system development lifecycle methodologies can be used, for example, waterfall, V-model, Rapid Application Development (RAD), prototyping, and the spiral model. The waterfall model is probably the most common development lifecycle method. It is a linear and sequential process, including requirements, design, implementation, verification, and maintenance phases. Equally popular is the V-model, which extends the waterfall model by associating each of the development phases with verification and validation. The RAD model is an alternative to the waterfall model that aims at reducing planning effort and emphasizes development using prototypes instead of design specifications. The prototype model focuses on creating prototypes, with steps to identify basic requirements and to iteratively develop, review, and revise prototypes.
The spiral model is a combination of the waterfall and prototype models, which uses a risk-driven process to guide multiple parties in large and complex development projects. It involves a cyclic approach for defining requirements and incrementally developing and refining prototypes. It can be seen from this discussion that software is playing an increasingly important role in the Smart Grid. Consequently, secure development practices, integrated into existing development lifecycles, are mandatory.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780128021224000080

HASARD
Hong Zhu, ... Yanlong Zhang, in Relating System Quality and Software Architecture, 2014

5.5.2 The object system

The object of the case study is an e-commerce system for online trading of medicine. The system is operated by the Medicine Trading Regulation Authority of Hunan Province, P. R. China, to supply medicines to all state-owned hospitals in the province. Its main functions include (a) customer relationship management, (b) product catalogue management, (c) online trade management, (d) online auction of medicine supply bids, (e) order tracking and management, (f) advertisement release, and (g) a search engine for medicine information. The system was implemented using J2EE technology. The system includes the following functional components.
•Management component: Supports management activities, including the management of information release, trading centers, users' membership, manufacturer membership, permission to trade and/or produce medicine, and logs of online activities.
•Content management: Manages the information content stored, processed, and displayed by the system, such as medicine catalogues, prices, geographical information, and sales information.
•Online trading: Provides an interface and facilities for online trading activities and the links to other information content such as catalogues, product information, and contract templates.
•Public relationship: Maintains the public relationship between the organization and its various types of customers, including sending out invitations to the public to bid in auctions, and so on.
•Order tracking: Provides the interface and facilities to track the business process of each deal.
•Communication management: Provides secure communication facilities for sending messages and manages the mail sent and received by the system.
•Report generation: Answers queries from managers about various statistics of the online trading and generates financial reports.
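A component list like the one above implies a common interface behind some dispatcher. The sketch below is purely illustrative (the chapter does not show the system's actual J2EE interfaces, and all names here are hypothetical): each functional component implements one `handle` contract, and a front controller routes requests to it.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Common contract for the system's functional components."""
    @abstractmethod
    def handle(self, request: dict) -> dict: ...

class OnlineTrading(Component):
    def handle(self, request):
        # Placeholder for trade validation, contract templates, etc.
        return {"status": "order placed", "item": request["item"]}

class ReportGeneration(Component):
    def handle(self, request):
        # Placeholder for statistical queries over trading data.
        return {"status": "report", "total": sum(request["sales"])}

class Application:
    """Front controller routing requests to functional components,
    in the spirit of a J2EE servlet dispatcher."""
    def __init__(self):
        self.components = {}

    def mount(self, name: str, component: Component) -> None:
        self.components[name] = component

    def route(self, name: str, request: dict) -> dict:
        return self.components[name].handle(request)

app = Application()
app.mount("trade", OnlineTrading())
app.mount("report", ReportGeneration())
print(app.route("report", {"sales": [100, 250, 75]}))
```

Structuring the components behind one interface is what later allows the quality analysis in the case study to reason about each component's responsibilities in isolation.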
The case study was conducted after the object system had been released and in operation for more than a year. However, the problems in the operation of the system were not revealed to the analysts involved in the case study before the predictions of the system's problems were made. This enables us to see how well the result of the quality analysis matches reality.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780124170094000053

Detection Systems
Clifton L. Smith, David J. Brooks, in Security Science, 2013

Types of Detectors

The principle of detection relies on sensing technology to discover the presence of a person or object within its field of view. That is, if the purpose of security technology is to detect the presence or activities of people, then the detection methods must be devised to respond to these stimuli. Thus, the detection of the presence or activities of people requires the development of appropriate sensors, and this is currently a major applied scientific endeavor for the protection of assets. A schematic approach to the functional components of security detection requires the following:
•A signal must be produced by the person, or the actions of the person, to be sensed by the detector. The signal could be in the form of reflected light (detected by a camera), near-infrared radiation from body heat (detected by a passive infrared [PIR] detector), a sound (detected by a microphone), movement when touching a fence (detected by a microphonic cable embedded in the fence), or a molecular vapor from a package of drugs (detected by specific molecular sensors). All of these examples describe a signal for detection.
•A sensor in a security detection system responds to the signals with which it is compatible. That is, the sensor is capable of detecting the source that produced the signal, complying with the application of the detector in a DiD strategy.
There is a wide range of sensors in security technology systems, from break-glass detectors, which are microphones tuned to the frequencies of breaking glass, to X-ray detectors for the presence of explosives. Other examples include charge-coupled device (CCD) chips in cameras, which detect low levels of light, and sensors for the disturbances in magnetic fields produced by the presence of ferromagnetic metals.
•Usually, a low-amplitude signal is received by the sensor in a detector, so it is necessary to increase the level of the signal through an amplifier. The signal-to-noise ratio (SNR) is increased by the amplifier to detect a change in signal strength from the presence of a person. The effect of the amplifier is to increase the sensitivity of the detection function so that it may detect subtle changes from an intrusion within the system's field of view. Depending on the type and style of sensors used in the security technology, the amplifier will possess functions to increase the signal strength; for example, fiber-optic cable could use laser amplification, and opto-electronic solid-state amplifiers can be applied to light intensifiers.
•The function of the analyzer is to decide whether a signal has been detected or only noise has been received. Even after amplification, some signals are still weak and need to be discriminated from background noise. Discriminant analysis is often included in the circuitry to determine whether the immediate signal shows that a change has occurred. That is, if a small change in signal quality can be detected, then this effect will indicate the presence of an intruder. Discriminant analyzers incorporate intelligence into the logic circuits of detection systems to better differentiate between active signals and background noise. Thus, these "smart" detection systems are able to assess signals against predetermined criteria to accept or reject the detection signal.
•The function of an alarm in a detection system is to indicate that an anomalous signal has been detected. The signal may have been generated by the presence of an unauthorized person or action, and it indicates that a response is required to investigate the anomalous incident. However, the issue of unwanted alarms, where spurious signals are generated by sources other than actual intruders or actions, requires that the authenticity of the alarm condition be established. It is necessary to understand the reliability (false alarms through instability of a device) and validity (unwanted alarms through environmental sources) of the detection system to achieve optimum effectiveness for the protection of assets. The discrimination between an actual attack on the detection system and a spurious signal from the surroundings will determine the validation level of the system. The incorporation of intelligence into the discrimination function of a detection system will reduce the frequency of unwanted alarms.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780123944368000060

The IEEE 802.16m Convergence Sub-Layer
Sassan Ahmadi, in Mobile WiMAX, 2011

Publisher Summary

This chapter provides a description of the functional components and protocols associated with the IEEE 802.16m service-specific Convergence Sublayer (CS). The convergence sublayer is located on top of the IEEE 802.16 MAC sublayer and interfaces the MAC sublayer with the network layer protocols. The convergence sublayers of the IEEE 802.16m and IEEE 802.16-2009 standards have very similar behavior; the only differences are in the assignment and use of connection identifiers in the two standards, as well as the exclusion of some unused legacy protocols. The Internet Protocol CS (IPCS) and Generic Packet CS (GPCS) are the two types of service-specific CS supported by IEEE 802.16m; they are used to transport packet data over the air interface.
When using GPCS, classification is performed in protocol layers above the CS, and the relevant information for performing classification is transparently provided during connection set-up or change. The Asynchronous Transfer Mode CS (ATM CS) and Ethernet CS variants that were specified in the IEEE 802.16-2009 standard are no longer supported in IEEE 802.16m due to a lack of industry interest. Other air interface standards such as 3GPP LTE also use such logical interfaces between their Layer 2 service access points and the network layer protocols. The Packet Data Convergence Protocol (PDCP) in 3GPP LTE performs ciphering and encryption of the MAC PDUs. This is an important difference between the MAC functions of IEEE 802.16 and 3GPP LTE.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780123749642100050

Cloud Computing Infrastructure for Data Intensive Applications
Yuri Demchenko, ... Charles Loomis, in Big Data Analytics for Sensor-Network Collected Intelligence, 2017

Abstract

This chapter describes the general architecture and functional components of the cloud-based big data infrastructure (BDI). The chapter starts with an analysis of emerging Big Data and data-intensive technologies and provides a general definition of the Big Data Architecture Framework (BDAF), which includes the following components: Big Data definition; Data Management, including data lifecycle and data structures; the generically cloud-based BDI; Data Analytics technologies and platforms; and Big Data security, compliance, and privacy. The chapter refers to the NIST Big Data Reference Architecture (BDRA) and summarizes general requirements for Big Data systems described in NIST documents. The proposed BDI and its cloud-based components are defined in accordance with the NIST BDRA and BDAF.
This chapter provides a detailed analysis of two bioinformatics use cases as typical examples of the Big Data applications that have been developed by the authors in the framework of the CYCLONE project. The effective use of cloud for bioinformatics applications requires maximum automation of application deployment and management, which may include resources from multiple clouds and providers. The proposed CYCLONE platform for multicloud, multiprovider application deployment and management is based on the SlipStream cloud automation platform and includes all necessary components to build and operate complex scientific applications. The chapter discusses existing platforms for cloud-powered application development and deployment automation, in particular the SlipStream cloud automation platform, which allows multicloud application deployment and management. The chapter also includes a short overview of the existing Big Data platforms and services provided by the major cloud service providers, which can be used for fast deployment of customer Big Data applications using the benefits of cloud technologies and global cloud infrastructure.

Read full chapter URL: https://www.sciencedirect.com/science/article/pii/B9780128093931000027

What are the four primary functional components of a software application?

The four general functions of any application are (1) data storage - storage of the system's data; (2) data access logic - providing access to the system's data; (3) application logic - the system's processing functions; and (4) presentation logic - the appearance of the system to the user and the method used to give ...
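The four functions listed above can be made concrete with a small sketch, one section per layer. The data and function names are illustrative only:

```python
# Data storage: an in-memory table standing in for a database.
_ORDERS = [
    {"id": 1, "amount": 40.0},
    {"id": 2, "amount": 60.0},
]

# Data access logic: the only code that touches the storage layer.
def fetch_orders():
    return list(_ORDERS)

# Application logic: the system's processing functions.
def total_revenue(orders):
    return sum(o["amount"] for o in orders)

# Presentation logic: how the result appears to the user.
def render(total):
    return f"Total revenue: ${total:.2f}"

print(render(total_revenue(fetch_orders())))
```

Keeping the four layers in separate functions means any one of them (say, swapping the in-memory table for a real database) can change without touching the other three.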
What are the three primary hardware components of a system?

Computer systems consist of three components: a central processing unit, input devices, and output devices. Input devices provide data input to the processor, which processes the data and generates useful information that is displayed to the user through output devices. Data and information are held in the computer's memory.
Why is it useful to define the non-functional requirements in more detail even if the technical environment requirements dictate the specific architecture?

If the technical environment requirements dictate the architecture design, it is still important to define the other nonfunctional requirements in detail, because these requirements will become important in the later design and implementation phases of the project.
When using a client

A server host runs one or more server programs, which share their resources with clients. A client usually does not share any of its resources, but requests content or a service from a server. Clients therefore initiate communication sessions with servers, which await incoming requests.
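The client-server relationship described above can be demonstrated with a minimal TCP example: the server shares a trivial resource (an upper-casing service) and waits; the client initiates the session and requests it. This is a generic sketch, not tied to any system discussed earlier:

```python
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Server host: shares a resource and awaits one incoming request."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)

    def run():
        conn, _ = srv.accept()          # wait for the client to initiate
        with conn:
            data = conn.recv(1024)
            conn.sendall(data.upper())  # the "service" the server provides
        srv.close()

    threading.Thread(target=run, daemon=True).start()
    return srv.getsockname()[1]         # actual port chosen by the OS

def client_request(port, message: bytes) -> bytes:
    """Client: initiates the session and requests the service."""
    with socket.create_connection(("127.0.0.1", port)) as sock:
        sock.sendall(message)
        return sock.recv(1024)

port = serve_once()
print(client_request(port, b"hello"))
```

Note the asymmetry the text describes: the server only ever waits (`accept`), while the client always initiates (`create_connection`).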