Which of the following is a device placed between an external untrusted network and an internal trusted network to serve as the sole target for attacks?

Protection in Untrusted Environments

In Virtualization for Security, 2009

Publisher Summary

This chapter discusses the protection measures companies apply in untrusted environments. The security community has high hopes for virtualization, as it gives researchers an unprecedented view into the behavior of unknown software applications. Virtual machines have been used for quite some time among top anti-malware companies. A recent surge of reported vulnerabilities and the emergence of commercial anti-VM libraries have pushed these companies to change their posture and begin adapting to a world where virtualization is a highly valuable tool but also an untrusted environment. In the enterprise, virtualization is improving security procedures by allowing purpose-built appliances to be built and deployed in untrusted environments. It redefines how enterprises think about their software risk exposure and how best to manage their business-critical software assets. Separation between the critical and the risky has long been a conundrum of personal computing. The disposable nature of virtual machine images is about to change where and how we use software applications.


URL: https://www.sciencedirect.com/science/article/pii/B978159749305500013X

Intrusion Response Systems: A Survey

Bingrui Foo, ... Eugene H. Spafford, in Information Assurance, 2008

Contributions and Further Work

Cachin [15] presents specific techniques for distributing trust in an untrusted environment. The work is significant in that it presents a clear approach that allows a system designer or administrator to easily incorporate this architecture into a network with no existing use of diverse replicas and thereby improve the survivability of the system.

Extensions to the scheme are discussed, such as using proactive recovery, dynamic grouping of servers, hybrid failure structures that distinguish between natural and malicious failures, and optimistic protocols that adapt their speeds depending on the presence of adversaries (due to the significant overhead of the atomic broadcast protocols).


URL: https://www.sciencedirect.com/science/article/pii/B978012373566950015X

Separation

Edward G. Amoroso, in Cyber Attacks, 2011

Physical Separation

One separation technique that is seemingly obvious, but amazingly underrepresented in the computer security literature, is the physical isolation of one network from another. On the surface, one would expect that nothing could be simpler for separating one network from any untrusted environment than just unplugging all external connections. The process is known as air gapping, and it has the great advantage of not requiring any special equipment, software, or systems. It can be done to separate enterprise networks from the Internet or components of an enterprise network from each other.

Air gapping allows for physical separation of the network from untrusted environments.

The problem with physical separation as a security technique is that as complexity increases in the system or network to be isolated, so does the likelihood that some unknown or unauthorized external connection will arise. For example, a small company with a modest local area network can generally enjoy high confidence that external connections to the Internet are well known and properly protected. As the company grows, however, and establishes branch offices with diverse equipment, people, and needs, the likelihood that some unrecognized external connectivity will arise is high. Physical separation of the network thus becomes more difficult.

As a company grows, physical separation as a protection feature becomes increasingly complex.

So how does one go about creating a truly air-gapped network? The answer lies in the following basic principles:

Clear policy—If a network is to be physically isolated, then clear policy must be established around what is and what is not considered an acceptable network connection. Organizations would thus need to establish policy checks as part of the network connection provision process.

Boundary scanning—Isolated networks, by definition, must have some sort of identifiable boundary. Although this can certainly be complicated by firewalls embedded in the isolated network, a program of boundary scanning will help to identify leaks.

Violation consequences—If violations occur, clear consequences should be established. Government networks in the U.S. military and intelligence communities, such as SIPRNet and Intelink, are protected by laws governing how individuals must use these classified networks. The consequences of violation are not pleasant.

Reasonable alternatives—Leaks generally occur in an isolated network because someone needs to establish some sort of communication with an external environment. If a network connection is not a reasonable means to achieve this goal, then the organization must provide or support a reasonable work-around alternative.
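The policy-check and boundary-scanning principles above can be sketched in a few lines. The allowlist and host names below are entirely hypothetical; a real program would feed in connections discovered by an actual boundary scan:

```python
# Hypothetical boundary-scan helper: compare discovered external
# connections against the connections approved by policy.

APPROVED = {("dmz-fw-1", "203.0.113.10"), ("vpn-gw", "198.51.100.7")}

def find_leaks(discovered):
    """Return discovered (host, peer) pairs not approved by policy."""
    return sorted(set(discovered) - APPROVED)

leaks = find_leaks([
    ("dmz-fw-1", "203.0.113.10"),   # approved by policy
    ("lab-pc-42", "192.0.2.99"),    # unapproved: a policy violation
])
print(leaks)  # [('lab-pc-42', '192.0.2.99')]
```

Any pair the scan surfaces that policy does not list is, by definition, a leak to investigate.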

Perhaps the biggest threat to physical network isolation involves dual-homing a system to both an enterprise network and some external network such as the Internet. Such dual-homing can easily arise where an end user utilizes the same system to access both the isolated network and the Internet. As laptops have begun to include native 3G wireless access, the likelihood of dual-homing increases. Regardless of the method, if any sort of connectivity is enabled simultaneously to both networks, then the end user creates an inadvertent bridge (see Figure 3.8).
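The bridge condition described here reduces to a simple set test over an asset inventory. A minimal, illustrative sketch (the network and host names are invented for the example):

```python
# Toy inventory check: flag systems attached to both the isolated
# (trusted) network and any external network at the same time --
# the inadvertent "bridge" of a dual-homed end-user system.

TRUSTED = {"corp-lan"}
EXTERNAL = {"internet", "3g-wireless", "guest-wifi"}

def find_bridges(attachments):
    """attachments: dict of host -> set of networks it is joined to."""
    return sorted(
        host for host, nets in attachments.items()
        if nets & TRUSTED and nets & EXTERNAL
    )

hosts = {
    "desktop-1": {"corp-lan"},
    "laptop-7":  {"corp-lan", "3g-wireless"},  # dual-homed bridge
}
print(find_bridges(hosts))  # ['laptop-7']
```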


Figure 3.8. Bridging an isolated network via a dual-homing user.

Dual-homing creates another area of vulnerability for enterprise networks.

It is worth mentioning that the bridge referenced above does not necessarily have to be established simultaneously. If a system connects to one network and is infected with some sort of malware, then this can be spread to another network upon subsequent connectivity. For this reason, laptops and other mobile computing devices need to include some sort of native protection to minimize this problem. Unfortunately, the current state of the art for preventing malware downloads is poor.

A familiar technique for avoiding bridges between networks involves imposing strict policy on end-user devices that can be used to access an isolated system. This might involve preventing certain laptops, PCs, and mobile devices from being connected to the Internet; instead, they would exist solely for isolated network usage. This certainly reduces risk, but is an expensive and cumbersome alternative. The advice here is that for critical systems, especially those involving safety and life-critical applications, if such segregation is feasible then it is probably worth the additional expense. In any event, additional research in multimode systems that ensure avoidance of dual-homing between networks is imperative and recommended for national infrastructure protection.

Imposing strict policies regarding connection of laptops, PCs, and mobile devices to a network is both cumbersome and expensive but necessary.


URL: https://www.sciencedirect.com/science/article/pii/B9780123849175000032

Restoring Trust and Business Services After a Breach

Kevvie Fowler, ... Paul Hanley, in Data Breach Preparation and Response, 2016

On Hosts

Your organization should have master images of servers/devices, per type, frequently referred to as “Gold Standard Images”: for example, a webserver image, a corporate desktop image, etc. These images should be trusted and should capture the hardened software and file configuration for each type of system deployed within the untrusted environment. Hashing each file in this gold standard with SHA-2 or another secure hashing algorithm lets you generate a baseline for what “known-good” looks like. You can then compare all other devices of that class against this baseline to identify anomalies.
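A minimal sketch of building and comparing such a baseline, using SHA-256 (a member of the SHA-2 family) from Python's standard library; the function names are our own, not from the text:

```python
# Build a "known-good" baseline from a gold-standard image directory,
# then diff another system's file tree against it to surface anomalies.
import hashlib
from pathlib import Path

def baseline(root):
    """Map relative file path -> SHA-256 digest for every file under root."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def anomalies(gold, candidate):
    """Paths that are new, missing, or modified relative to the gold image."""
    return {
        path for path in gold.keys() | candidate.keys()
        if gold.get(path) != candidate.get(path)
    }
```

In practice the gold baseline is computed once, stored securely, and re-used against every device of that class.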

Checking Identity and Access Management systems or their equivalent (Windows Active Directory, etc.) can also identify HoIs. Look for large volumes or patterns of failed login attempts followed abruptly by a successful login, for newly created user accounts, and for existing accounts that have had their privileges augmented. This activity can be incorporated into your timeline, which will likely be updated dynamically with events and activity as you go through recovery.
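The failed-then-successful login pattern described above can be expressed as a small detector. This is an illustrative sketch with an invented event format and arbitrary threshold and window defaults:

```python
# Flag accounts where a burst of failed logins is followed shortly
# by a successful login -- a classic brute-force indicator.
from collections import defaultdict

def suspicious_accounts(events, threshold=5, window=300):
    """events: time-ordered (timestamp_seconds, account, success) tuples.
    Flag accounts with >= threshold failures in the `window` seconds
    preceding a successful login."""
    flagged = set()
    failures = defaultdict(list)   # account -> recent failure timestamps
    for ts, account, success in events:
        if not success:
            failures[account].append(ts)
        else:
            recent = [t for t in failures[account] if ts - t <= window]
            if len(recent) >= threshold:
                flagged.add(account)
            failures[account].clear()
    return flagged

events = [(i, "svc-admin", False) for i in range(5)] + [(60, "svc-admin", True)]
print(suspicious_accounts(events))  # {'svc-admin'}
```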

When an HoI is found, it should be imaged and analyzed to confirm or rule out whether it has indeed been compromised. When you identify suspicious activity that appears to be associated with the incident, examine that activity against a timeline of events to determine what other activity occurred on the system or network at or around the same time. This should include host activity, network activity, email or social media communication, and the access timestamps of registry and database objects. At this point you should have a set of indicators (systems, common files, registry entries, or communication patterns) that have been deemed malicious. These can serve as the details needed to develop an IoC or IoA that can help identify other HoIs compromised using the same methods.


URL: https://www.sciencedirect.com/science/article/pii/B9780128034514000083

Protecting Web Browsers

Sean-Philip Oriyano, Robert Shimonski, in Client-Side Attacks and Defense, 2012

Web Browsers as a Target

Client-side attacks are commonly carried out between a web browser and a web server, because this is one of the easiest avenues of attack, as mentioned in the first two chapters. Firewalls and content filters let HTTP (and the scripts it carries) in, and web browsers are easily exploited via the web servers they connect to. This is why so much attention was given to web browsers in this chapter. Web browsers are indeed a target, so take care to protect them as much as possible, not only by patching them and keeping them secure, but by understanding them as thoroughly as possible and educating yourself and end users on their use.

As we can see in Figure 3.18, simply by visiting a web page on a malicious server, unknowingly running a script, or downloading and installing potentially harmful software that appears benign, your browser's settings can be overwritten immediately.


Figure 3.18. A Basic Browser Attack

What's worse, if you used Google.com as your home page, how would you really be able to tell you had been “hacked”? In this example, the browser's default page has been overwritten to drive your traffic to a different location, and because the page looks nearly identical, the change is not only misleading but often overlooked by most users.

As we have seen with cross-site scripting in Chapter 2 and with some of the per-browser vulnerabilities covered here in Chapter 3, the web browser is an attractive and tempting target for attackers. This is largely due to the numerous well-known vulnerabilities present in each browser, the large number of browsers deployed across different operating systems and platforms, and the browser's ability to operate openly through the security systems meant to protect it, such as firewalls and content filters.

Web browsers have proven themselves to be both a very beneficial piece of software and, at the same time, a liability because of the environment they work in. Consider the environment a web browser accesses and how this interaction impacts security on the client. When in use, a web browser opens a “portal” from a secure environment on the client to an insecure environment such as the Internet. This access essentially means that the insecure and untrusted environment of the Internet is brought into the client's environment, along with all the potential vulnerabilities associated with it.

Note

When a web browser accesses content from the Internet and runs it locally, a potential security risk is created by running untrusted code in a trusted environment. One of the techniques we discussed for mitigating the risks of such an arrangement is sandboxing, which is used to prevent untrusted code from gaining unrestricted access to the local system.

Selecting a Safe Web Browser

So which browser is the biggest target? Well, that is an argument waiting to happen, but most security professionals and research organizations would tend to agree that Internet Explorer holds this distinction. Internet Explorer lends itself to being the biggest target simply because it occupies the enviable (or unenviable) position of being installed on the largest number of desktops around the world. By some estimates the Windows operating system is installed on the majority of desktops worldwide, which adds up to a large number of targets that attackers have taken advantage of. Add to this mix the fact that IE is built on the Windows operating system, which has been shown time and time again to have numerous defects that lend themselves to client-side attacks in the form of buffer overflows, cross-site scripting, remote code exploits, and many others.

Of course, Internet Explorer is not the only browser on the market, as we have seen in this chapter, so what about the others? Firefox has rapidly become a target, mainly due to its increasing presence on the desktop among users looking for alternatives to other browsers as well as those simply wanting to try a new one. So if Firefox is a target, how vulnerable is it? The reality is that Firefox has a number of vulnerabilities of its own, and at several points in its history it has actually had more defects and other security flaws than IE. Looking objectively at the situation, one will note that Firefox is vulnerable to many of the same attacks as IE, but its defects and flaws also tend to be addressed much more quickly thanks to an aggressive effort by the community.

Warning

One of the bigger misconceptions perpetuated on the Internet is the idea that Firefox is the most secure browser around, or, as some would claim, “bulletproof.” One should never become so fanatical about a browser as to believe it is invulnerable to attack; this type of thinking can be dangerous. As a security professional you should always strive to be objective and examine the information being presented.

The next browser that is a target is Google Chrome, the “new kid” on the block. This browser has only been around a short time, but in that time it has taken its share of flak on several fronts. The biggest issue with Chrome, which surfaced shortly after release, was the perceived issue of the browser reporting information back to Google regarding browsing and other habits. This was later disavowed by Google and clarified as occurring only with the end user's consent, and even then only specific information is sent. So is Chrome a target? Yes, but maybe not as big a one as IE or Firefox. What may contribute to Chrome becoming a larger target is the fact that it borrows code from many different libraries, including Firefox's and others', meaning it may also inherit some of the vulnerabilities of those browsers and libraries.

Safari is another popular browser on the market. Even though it may not have the market share of the other browsers mentioned in this chapter, it is still a major target for a few reasons. First, the browser is part of the increasingly popular Mac OS X platform, meaning it has become a larger target as the platform has grown more popular. Second, the browser ships on other popular devices such as the iPad, the iPhone, and other Apple hardware, which means more targets for an attacker. With Apple's market share growing day by day, Safari will continue to grow in use and become more widespread as time goes on.

Opera is the last browser we covered and, just like the others, is equally vulnerable. Because of its history in the marketplace and its flexible use across so many operating systems and devices, it is seen as often as any of the other browsers available.

So, in sum, many users use IE because it came with the Windows system they purchased. Many of those same users think they are safer downloading another browser and take their pick of the many options available. Others buy a device (such as an iPhone) and with it inevitably comes Safari, installed by default. Regardless, all of these browsers have vulnerabilities associated with them, and because there are so many options and so many browsers installed, as security professionals we need to understand them all in order to prevent being exploited.

Warning

Never forget that the increasing number of Internet-enabled devices, such as mobile devices and appliances, only makes the problem of attacks worse. Think of devices such as the iPad and iPhone, which are seeing rapid adoption by the public; with these devices, individuals are browsing the Internet from more places and carrying their information with them. With increasing amounts of information being stored on these Internet-enabled devices, attackers have increasingly viewed them as targets for attack. Chapter 9 covers mobile devices, security, and client-side attacks.

Warning

It should be noted that there are other web browsers out in the wild. There have been many, and new ones are in development all the time. Consider Lynx, Mosaic, Konqueror, K-Meleon, Galeon, OmniWeb, AOL Explorer, Dillo, Dooble, Flock, and so on. Also, between set-top cable TV boxes, gaming systems, handhelds, book readers, and more, there are web browsers found almost everywhere we interact with the Internet. When using any of these browsers, follow the same methodology shown in this chapter (and this book) to secure against client-side attacks and protect yourself.


URL: https://www.sciencedirect.com/science/article/pii/B9781597495905000031

Blockchain for IoT-based smart cities: Recent advances, requirements, and future challenges

Umer Majeed, ... Choong Seon Hong, in Journal of Network and Computer Applications, 2021

7 Conclusion

Blockchain has emerged as a disruptive technology for secure P2P interaction in an untrusted environment with disintermediation. In this paper, we have explored the role of blockchain in smart cities. We chronologically investigated the genesis of blockchain technology as well as its inception and further enhancements, discussing the constituent technologies of blockchain along the way. We reviewed the prevailing blockchain platforms and consensus algorithms available in the blockchain ecosystem for use in smart city applications. We provided, as a discussion, technical due diligence on potential applications of blockchain. We outlined important factors influencing the selection of a blockchain platform. We critically reviewed the literature utilizing blockchain in prominent smart city applications. We presented real-world case studies that effectively employed blockchain to provide reliable and secure services in smart cities. We discussed the fundamental data-centric requirements for the employment of blockchain in smart cities. Finally, we presented the open research challenges preventing blockchain from becoming a key technology in innovating smart cities.

We conclude that blockchain will be a key technology in the era of a data-driven world. Innovation in blockchain technologies, and their implementation in smart cities to improve quality of life, is a popular area in contemporary research communities. However, there are still many challenges and requirement constraints to be explored and resolved before blockchain can be employed in sustainable urban development initiatives. This survey can help researchers identify and tackle the challenges involved in designing and developing blockchain-based solutions for IoT-based smart cities.


URL: https://www.sciencedirect.com/science/article/pii/S1084804521000345

FPGA-based Physical Unclonable Functions: A comprehensive overview of theory and architectures

N. Nalla Anandakumar, ... Mark Tehranipoor, in Integration, 2021

6.8 PUF-based authentication

PUF-based authentication is very similar to PUF-based identification, in that once again a server creates a database of the CRPs of a PUF device. This time, however, the PUF device passes through untrusted environments, where it can be substituted by counterfeits. Therefore, if a client wants to ensure the authenticity of such a PUF device, it can do so by sending a request for authentication of the relevant PUF device to the server. The server then responds with a challenge for the PUF device to be identified. If the produced response matches the relevant response stored in the server's database, the device is authenticated; otherwise it is considered a fake device, as shown in Fig. 23. In this case, authentication is achieved without explicitly revealing any of the stored CRPs. Once again, however, in the case of successful authentication, a CRP of the authenticated PUF may be completely revealed and therefore should not be used again. To avoid this, a hashing scheme may be employed so that the CRPs are not transmitted in the clear, or a key can be used for implicit authentication, among other options. Finally, error correction may again be required if the PUF responses are noisy.
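The hashing variant mentioned above, in which CRPs are never transmitted in the clear, can be modeled in a few lines. This is a toy sketch only: the "PUF" is simulated with a per-device secret, whereas a real PUF derives its response from physical device variation:

```python
# Toy model of hashed PUF challenge-response authentication:
# the raw CRP response never crosses the untrusted channel.
import hashlib, secrets

def puf_response(device_secret, challenge):
    # Stand-in for the physical PUF: deterministic per device+challenge.
    return hashlib.sha256(device_secret + challenge).digest()

# Enrollment (trusted phase): the server records CRPs for the device.
device_secret = b"intrinsic-silicon-variation"   # abstraction, not a real PUF
challenge = secrets.token_bytes(16)
server_db = {challenge: puf_response(device_secret, challenge)}

# Authentication (untrusted phase): the server sends challenge + nonce;
# the device returns H(response || nonce), never the raw response.
nonce = secrets.token_bytes(16)
proof = hashlib.sha256(puf_response(device_secret, challenge) + nonce).digest()
expected = hashlib.sha256(server_db[challenge] + nonce).digest()
print(proof == expected)  # True
```

Because a fresh nonce is hashed into each proof, eavesdropping on one authentication round does not reveal the underlying CRP.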

Furthermore, some practical PUF-based authentication systems have been demonstrated on FPGAs [13,19,148]. In [13], the authors presented an intrinsically reconfigurable DRAM PUF based on the idea of DRAM refresh pausing and demonstrated the use of this PUF for device authentication through a secure, low-overhead methodology on an Altera Stratix IV GX FPGA-based Terasic TR4-230 development board. In [19], the authors demonstrated an authentication and key exchange protocol on an FPGA by combining the concepts of PUFs, IBE, and a keyed hash function. The complete implementation occupies 1733 slices on a Xilinx Artix-7 FPGA, of which 456 slices are occupied by the Double Arbiter PUF and 1277 slices by the BCH error-correcting logic. The authors in [212] proposed a PUF-based anonymous authentication scheme on FPGAs for hardware devices and IPs in an edge computing environment. The authors in [148] introduced the slender PUF protocol, based on pattern matching, to authenticate the responses generated by an Arbiter PUF; this protocol implementation requires 652 LUTs and 1400 registers on a Xilinx Virtex 5 FPGA. In [213], the authors demonstrated an FPGA implementation of a provably secure protocol that supports privacy-preserving mutual authentication between a server and a constrained device; this implementation requires 3543 LUTs, 1275 registers, and 8 block RAMs on a Xilinx Virtex 5 FPGA.


Fig. 23. PUF-based Authentication.


URL: https://www.sciencedirect.com/science/article/pii/S0167926021000766

Towards a virtual network function research agenda: A systematic literature review of VNF design considerations

Chuanji Zhang, ... Steven A. Wright, in Journal of Network and Computer Applications, 2019

5.1.8 VNF support for security

Only two papers found using our search criteria discuss security features within VNFs in detail, while five others discuss management authentication support.

In (Shih et al., 2016), the security of the state information within the VNF is addressed. If the VNF is running in an untrusted environment, the hypervisor or any other service running in privileged mode can view and/or modify the application state. The paper uses Intel's Software Guard Extensions (SGX) to protect against attackers attempting to steal and/or manipulate the internal state of the VNFs by putting the state information in a secure memory enclave called S-NFV. Isolation is maintained between the OS and the enclave by ensuring that the code and data in the enclave do not rely on any outside memory, and that the enclave has a limited but necessary set of safe APIs.

The authors in (Bronstein and Shraga, 2014) describe the security issues related to virtualizing CPE. The change to vCPE is characterized in terms of location change - from home to NSP network; physical change - from hardware to software; aggregation change - from dedicated functions to potentially shared function; and responsibility change - from individual user to NSP. While the security issues discussed here are specific to virtualizing CPE function, they could be generalized in terms of ensuring correctness and integrity of service chains, isolation of VNFs, and allowing user management and configuration of VNFs.

Only five papers (Nadareishvili et al., 2016; Van den Abeele et al., 2015; Rosa et al., 2015; Davies et al., 2008; Malavalli and Sathappan, 2015) report that the VNF should support management authentication, though none of them implement it. These papers propose an authentication API that allows only authorized users to access and operate the VNFs. This design consideration is of great importance since it impacts every phase of the VNF life cycle. Management authentication is essential for any real-world deployment. Many papers may not have mentioned this consideration because it may have been implicitly assumed to be provided by the runtime environment, such as the VM.

As noted earlier, while security in NFV is a vast research area, support for security within the VNF plays an important role in overall security. In addition, security features that integrate well with NFV operations, such as service chains and the configuration and management of VNFs, can help achieve the operational efficiency and flexibility goals.


URL: https://www.sciencedirect.com/science/article/pii/S1084804519302516

Blockchain for 5G and beyond networks: A state of the art survey

Dinh C. Nguyen, ... Aruna Seneviratne, in Journal of Network and Computer Applications, 2020

4.2 Data sharing

One of the prominent characteristics of 5G is its strong data sharing capability, needed to cope with increasing content demands and data usage, especially in 5G IoT scenarios. According to the latest release from Cisco (Cisco Visual Networking Index: Forecast and Trends, 2017–2022 White Paper), global mobile data traffic on the Internet will increase sevenfold between 2017 and 2022, reaching 77.5 exabytes per month by 2022. The rapid increase of content delivery over mobile 5G networks has revealed the need for new, innovative data protection solutions to ensure secure and efficient data sharing over untrusted environments (Mollah et al., 2017). In fact, sharing data in mobile networks is highly vulnerable to serious data leakage risks and security threats due to data attacks (Mahmoud et al., 2015). Mobile users tend to use information without caring about where it is located or how reliably it is delivered, and the ability to control information at Internet scale is very weak. Blockchain may be an answer to such data sharing challenges. Indeed, blockchain can provide a wide range of features to improve the efficiency of data sharing in the 5G era, such as traceability, security, privacy, transparency, immutability, and tamper-resistance (Fan et al., 2017). To control user access to data resources, blockchain miners can check whether the requester meets the corresponding access control policy. Thanks to the decentralized architecture, which processes user requests over distributed nodes, the overall system latency for data delivery is greatly reduced and network congestion can be eliminated, improving the performance of data sharing with blockchain.
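The miner-side access-control check mentioned above can be illustrated with a toy policy evaluation; the record, roles, and policy fields here are invented for the example and do not correspond to any particular blockchain platform:

```python
# Toy illustration: before releasing a data pointer, a validating node
# compares the requester's attributes against the policy attached to
# the shared record.
RECORD = {
    "pointer": "data-ptr-001",
    "policy": {"roles": {"clinician", "auditor"}, "max_age_days": 30},
}

def authorize(requester_roles, record_age_days, record=RECORD):
    """Release the data pointer only if the requester holds an allowed
    role and the record is fresh enough under the attached policy."""
    policy = record["policy"]
    allowed = bool(requester_roles & policy["roles"])
    fresh = record_age_days <= policy["max_age_days"]
    return record["pointer"] if (allowed and fresh) else None

print(authorize({"clinician"}, 7))   # data-ptr-001
print(authorize({"marketing"}, 7))   # None
```

On a real chain this check would run inside the consensus or smart-contract layer, so every validating node enforces the same policy.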

The problem of secure storage for data sharing is considered and discussed in Li et al. (2019a). The authors leverage blockchain as the underlying mechanism of a decentralized storage architecture called Meta-key, wherein data decryption keys are stored in a blockchain as part of the metadata and protected by the user's private key. Proxy re-encryption is integrated with blockchain to realize ciphertext transformation, addressing security issues such as collusion attacks during key sharing in untrusted environments. In this context, the authors in Wang et al. (2018b) study blockchain to develop a data storage and sharing scheme for decentralized storage systems on the cloud. Shared data can be stored in cloud storage, while metadata such as hash values or user address information can be kept securely in the blockchain for sharing. In fact, cloud computing technology supports data sharing services well, for example through off-chain storage to improve the throughput of blockchain sharing (Zheng et al., 2018b) or data distribution over a cloud federation (Yang et al., 2018d).

In IoT networks, data transmission has faced various challenges in terms of low security, the high management cost of data centres, and supervision complexity due to the reliance on external infrastructure (Liang et al., 2019). Blockchain can provide much more flexible and efficient data delivery while still meeting stringent security requirements. A secure sharing scheme for industrial IoT is proposed in Liu et al. (2018c), which highlights the impact of blockchain on the security and reliability of IoT data exchange under untrustworthy system settings. In comparison to traditional databases such as SQL, blockchain can provide better sharing services, with low-latency data retrieval, higher degrees of security and reliability, and stronger resistance to malicious attacks (DoS, DDoS) on data sharing. Further, the privacy of data is well maintained by distributed blockchain ledgers, while data owners retain full control over the data they share in the network, improving the data ownership capability of sharing models (Lu et al., 2019c).

The work in Cech et al. (2019) also introduces a sharing concept empowered by blockchain and fog computing. The proposed solution constitutes a first step towards realizing blockchain adoption as a Function-as-a-Service system for data sharing. Fog nodes can collect IoT data arising from private IoT applications and securely share it with each other via a blockchain platform, which can verify all data requests and monitor data-sharing behaviours for threat detection.

Smart contracts running on blockchain have also demonstrated efficiency in data sharing services (Qian et al., 2018). Smart contracts can take on the role of building a trusted execution environment, so that a set of information exchange frameworks can be established on top of blockchain. For example, the study in Zhang and Chen (2019) leverages smart contracts to build trustless data sharing in vehicular networks. Roadside units (RSUs) can set the constraints for data sharing by using smart contracts that define the shared time, region scope, and objects, to make sure the data coins are distributed fairly to all vehicles that participate in contributing data. In addition, the authors of Bhaskaran et al. (2018) introduce a smart contract-based architecture for consent-driven and double-blind data sharing on the Hyperledger Fabric blockchain platform. In this system, confidential customer data can be authorized and validated by smart contracts, and the service providers can execute the data tasks, add attributes and metadata, and submit them to the blockchain for validation and recording in a transparent manner.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S1084804520301673

A comprehensive survey of hardware-assisted security: From the edge to the cloud

Luigi Coppolino, ... Luigi Romano, in Internet of Things, 2019

9.4 Adoption

TC is already used in devices such as smartphones and tablets, and also by manufacturers of constrained chipsets and IoT devices in fields such as industrial automation, automotive, and healthcare, who are now recognizing its benefit in protecting connected things. Cloud providers likewise leverage TC technologies to increase the security of customers' data and improve their Service Level Agreements (SLAs). The threat model of TC-based solutions fits particularly well in IoT contexts where the application and its sensitive data reside in untrusted environments, from the field up to the cloud. Trusted computing technologies such as Intel TXT or AMD PSP are mostly adopted in cloud hosts or gateways. This is due to their intrinsic characteristics: they target CPUs that are mainly found in general-purpose systems. As an example, IBM announced some years ago that it had brought Intel TXT to its SoftLayer cloud service. Amazon AWS also provides TXT in its cloud offerings as a security add-on. To protect data on edge devices, by contrast, silicon companies produce embedded solutions such as Infineon's OPTIGA or STMicroelectronics' STSafe.

Hardware-assisted TEE technologies are also of interest to the different entities of typical IoT deployments. SGX was initially designed to secure small applications, but several research works and companies have since started to use Intel SGX for more complex workloads such as enterprise-level services or even public cloud applications [69–72]. Just recently, IBM announced the possibility of deploying Intel SGX bare metal servers across all regions of IBM Cloud [73] to protect data-in-use. Likewise, Microsoft Azure presented Azure Confidential Computing [74], which uses Intel SGX to protect data processed in the cloud in a transparent fashion. Alibaba Cloud similarly offers a cloud service in which SGX is used to ensure data-in-use security [75]. AMD MET, by contrast, was designed specifically for public cloud environments. In recent years a number of providers, such as Amazon [76], Oracle [77], and Google, have started to adopt the EPYC processors containing the AMD MET technology.
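Common to the trusted computing technologies surveyed above is the attestation pattern: the platform produces a cryptographic measurement (a digest) of the code it loaded, and a verifier compares it against a known-good value before trusting the device. The following sketch shows only this generic pattern in plain Python; it does not use any actual TPM, TXT, or SGX API, and the function names are illustrative:

```python
import hashlib
import hmac

def measure(software_blob: bytes) -> str:
    """A 'measurement' is a cryptographic digest of the loaded code."""
    return hashlib.sha256(software_blob).hexdigest()

def verify_measurement(reported: str, known_good: str) -> bool:
    """Constant-time comparison against the expected measurement."""
    return hmac.compare_digest(reported, known_good)

firmware = b"trusted-bootloader-v1.2"
golden = measure(firmware)   # recorded when the platform was provisioned

# Unmodified software is accepted; any modification changes the digest.
assert verify_measurement(measure(firmware), golden)
assert not verify_measurement(measure(b"tampered-image"), golden)
```

Real implementations additionally sign the measurement with a hardware-protected key (e.g. inside a TPM or an SGX quoting enclave) so the verifier can be sure the report really came from genuine hardware rather than from software emulating it.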

Finally, there are a number of applications of the TrustZone extension in edge devices [78,79]. The ARMv8-M architecture is used in a range of embedded devices, and its TEE is of interest for intellectual property protection. Device makers can use TrustZone for ARMv8-M to store intellectual property in secure memory while still allowing non-secure applications to access it via APIs. They also use it for secure storage of critical information such as user data, identity information, and security keys, and even to secure end-to-end communication with the IoT gateway. TrustZone for ARMv8-M supports energy-conscious devices like wearables and battery-operated edge nodes in markets such as smart utilities and smart cities.

Read full article

URL: https://www.sciencedirect.com/science/article/pii/S2542660519300101

What is the relationship between the untrusted network the firewall and the Trusted network?

- The untrusted network refers to the Internet.
- The trusted network refers to the privately owned network.
- The firewall filters traffic from the untrusted network to the trusted network to ensure it is legitimate and not harmful.

Which security component separates a trusted network from an untrusted network?

"The firewall filters or prevents specific information from moving between the outside (untrusted) network and the inside (trusted) network."

What is the commonly used name for an intermediate area between a trusted network and an untrusted network?

(DeMilitarized Zone) A middle ground between an organization's trusted internal network and an untrusted, external network such as the Internet. Also called a "perimeter network," the DMZ is a subnetwork (subnet) that may sit between firewalls or off one leg of a firewall.

Is any device that prevents a specific type of information from moving between an untrusted network and a trusted network?

A) The firewall prevents specific types of information from moving between the untrusted network and the trusted network.
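The filtering behaviour described in these answers — permitting or blocking traffic between the untrusted and trusted networks according to a rule set — can be sketched as a first-match, default-deny rule engine. This is a simplified illustration in Python (the rule format and addresses are made up for the example), not a model of any particular firewall product:

```python
def filter_packet(rules, packet):
    """First matching rule wins; the default is to deny (drop) the packet."""
    for action, src_prefix, dst_port in rules:
        src_ok = src_prefix == "any" or packet["src"].startswith(src_prefix)
        port_ok = dst_port == "any" or packet["dst_port"] == dst_port
        if src_ok and port_ok:
            return action
    return "deny"

# Trusted network is 10.0.0.0/8 (prefix "10."); untrusted is everything else.
rules = [
    ("allow", "any", 443),    # permit HTTPS from anywhere
    ("allow", "10.", "any"),  # permit all traffic originating inside
    ("deny",  "any", "any"),  # explicit default deny
]

assert filter_packet(rules, {"src": "203.0.113.5", "dst_port": 443}) == "allow"
assert filter_packet(rules, {"src": "203.0.113.5", "dst_port": 22}) == "deny"
assert filter_packet(rules, {"src": "10.1.2.3", "dst_port": 22}) == "allow"
```

The explicit final deny rule embodies the principle behind the answers above: anything not specifically permitted to cross from the untrusted side to the trusted side is blocked.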