The use of each type of open source license in an embedded product design imposes a unique set of obligations on the development team incorporating that software into its products. Because of this, some embedded computer companies maintain a list of open source licenses approved for use by their developers. Other companies go further, explicitly listing which specific version of each open source package has been approved for possible incorporation into the company’s embedded computer products.
Ensuring that the development team is aware of – and in compliance with – the obligations associated with each of these open source licenses takes time and effort. Tools that identify and track the underlying licenses and help ensure the associated obligations are met can prove quite valuable when trying to hit aggressive product development milestones.
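The approved-license list described above can be enforced mechanically. The following is a minimal sketch of such a gate, assuming a hypothetical manifest of (package, version, license) entries; a real project would feed it from a license scanner such as FOSSology or scancode-toolkit rather than a hand-written list.

```python
# Which SPDX license ids the company has approved -- illustrative set only
APPROVED_LICENSES = {"MIT", "BSD-3-Clause", "Apache-2.0"}

# (package, version, SPDX license id) -- hypothetical example data
manifest = [
    ("libfoo", "1.2.0", "MIT"),
    ("libbar", "0.9.1", "GPL-3.0-only"),
]

def unapproved(entries, approved=APPROVED_LICENSES):
    """Return manifest entries whose license is not on the approved list."""
    return [e for e in entries if e[2] not in approved]

# Flag anything a developer would need to clear with legal before shipping
for pkg, ver, lic in unapproved(manifest):
    print(f"BLOCKED: {pkg} {ver} uses {lic}")
```

Run as part of the build, a check like this turns the approved list from a wiki page into an enforced milestone gate.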
refer to: http://embedded-computing.com/articles/the-not-code-quality/
Inevitably, the types of processors that will succeed in the embedded computing market will be the SoCs that provide hardware-accelerated functions; it is the only way applications will be able to meet their performance-power budgets. In other words, with homogeneous SMP devices, the performance gained by increasing core count does not scale: the more cores that share a common bus structure, the more each core must compete for memory bandwidth. This problem can be alleviated by designing chips that divide cores into clusters, where each cluster can operate autonomously if necessary.
What plans does the EEMBC have to expand its offerings in the future, and how can the industry get involved?
refer to: http://embedded-computing.com/articles/moving-qa-markus-levy-founder-president-eembc/
The 4th generation Intel® Core™ processors serve the embedded computing space with a new microarchitecture, which Kontron will implement on a broad range of embedded computing platforms. Besides a 15 percent increase in CPU performance, graphics performance in particular has doubled in comparison to solutions based on the previous generation of processors. At the same time, the thermal footprint has remained practically the same or has even shrunk.
Based on the 22 nm Intel® 3D transistor technology already used in the predecessor generation, the processors, formerly codenamed ‘Haswell’, deliver a performance increase that will doubtlessly benefit applications.
With improved processing and graphics performance, better energy efficiency, and broad scalability, the 4th generation Intel® Core™ processors and their new microarchitecture provide an attractive solution for a broad array of mid-range to high-end embedded applications in target markets such as medical, industrial automation, infotainment, and military.
refer to: http://embedded-computing.com/white-papers/white-intelr-coretm-processors/
“If it is a mobile application with low to moderate performance requirements, then Qseven is the right choice,” says Christian Eder, Marketing Manager at congatec AG headquartered in Deggendorf, Germany (www.congatec.com). “Medical systems typically require special functionalities such as ultrasonic control or high levels of isolation in order to protect patients in case of a malfunction. Standard SBCs typically do not feature that. The logical consequence is to create a custom carrier board that takes on all specific functionalities and completes it with a standard COM. Once the system is certified, it is quite easy to upgrade or scale to other CPUs while the certification remains valid or just needs to be updated. This provides a lot of freedom to choose the best-fitting CPU and graphics for a given application.”

This is just one example of why telehealth strategies are poised to revolutionize medicine. Telehealth not only provides quick access to specialists, but can also remotely monitor patients and reduce clinical expenses. Many of the systems needed to realize these benefits will operate on the edge, and require technology with the portability and price point of commercial mobile platforms, as well as the flexibility to perform multiple functions securely and in real time. All of this must be provided in a package that can meet the rigors of certification and scale over long lifecycle deployments.
refer to: http://smallformfactors.com/articles/qseven-coms-healthcare-mobile/
An analysis of the failure modes of DRAM in embedded memory modules has determined that DRAM components with suboptimal reliability tend to fail during the first three months of use. As newer DRAMs advance to smaller process geometries, there is a greater risk of chips that contain weak bits (a microscopic defect in an individual cell). A weak bit is not enough to cause a DRAM failure outright, but it can exhibit a single-bit error within weeks of initial field operation. Using Test During Burn-In (TDBI) helps eliminate such potential early failures and improves the overall reliability of memory products. Although most DRAM chips undergo a static burn-in at the chip level, TDBI offers a more comprehensive approach: a 24-hour burn-in test at the module level that dynamically runs and checks test patterns while the module is performing under stress conditions. Studies conducted by various embedded memory manufacturers show that using TDBI chambers can reduce early failures by up to 90 percent.
refer to: http://embedded-computing.com/articles/ruggedization-memory-module-design/
However, once the first bank became a victim, all the other institutions immediately started to learn more about the attacks, search for solutions, and then deploy those solutions quickly. “When I look at military cloud security solutions, there are many vendors and partners providing tools and solutions, but not many providing availability security. Attacks are hurting the availability of online services, and many antivirus vendors and firewall vendors do not focus on the availability aspect.” Cloud providers find protecting the shared infrastructure can be challenging because it is an expensive up-front cost, he continues.
refer to: http://mil-embedded.com/articles/cloud-security-the-dod/
Virtualization for embedded systems has many implementations in which two or more operating systems coexist to gain the benefits of each. One approach puts Microsoft Windows and a Real-Time Operating System (RTOS) together.
Much is being said about virtualization these days in the software world. Simply stated, virtualization is about getting multiple OSs to run on the same computing platform at the same time. Virtualization has been cited as a key technology for getting the most performance out of the newest multicore processors. But just as not all computing applications are the same, not all virtualization approaches are appropriate for all applications.
Embedded Systems have a key requirement that doesn’t normally apply to office and server computers: the need for deterministic response to real-time events. To support the requirement for determinism, embedded applications typically use RTOSs. Embedded applications also employ general-purpose OSs to handle operator interfaces, databases, and general-purpose computing tasks.
In the past, because OSs couldn’t successfully co-reside on one computing platform, system developers employed multiple processing platforms, using one or more to support real-time functions and others to handle general-purpose processing. System designers that can combine both types of processing on the same platform can save costs by eliminating redundant computing hardware. The advent of multicore processors supports this premise because it is possible to dedicate processor cores to different computing environments; however, the software issues posed by consolidating such environments require special consideration. Combining real-time and general-purpose operating environments on the same platform (Figure 1) places some stringent requirements on how virtualization is implemented.
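Dedicating cores to different environments starts with a static partition plan that the hypervisor or boot configuration enforces. The sketch below models such a plan and the two invariants it must satisfy; the four-core layout and environment names are illustrative assumptions, not a specific product's configuration.

```python
# Hypothetical partition of a 4-core part between an RTOS and a
# general-purpose OS, as in the consolidated platform described above.
PARTITIONS = {
    "rtos": {0, 1},              # deterministic, real-time workloads
    "general_purpose": {2, 3},   # operator UI, databases, general tasks
}

def validate(partitions, total_cores):
    """No core may serve two environments, and none may go unassigned."""
    assigned = [c for cores in partitions.values() for c in cores]
    if len(assigned) != len(set(assigned)):
        raise ValueError("core assigned to more than one environment")
    if set(assigned) != set(range(total_cores)):
        raise ValueError("unassigned or out-of-range core")
    return True

print(validate(PARTITIONS, total_cores=4))
```

On a Linux-hosted design, a partition like this would typically be enforced with mechanisms such as `sched_setaffinity` or the `isolcpus` kernel boot parameter, keeping the general-purpose scheduler off the cores reserved for real-time work.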
3. Hardware-aided embedded virtualization
4. Leveraging Intel Architecture
5. Embedded virtualization saves costs
The AMB-D255T1 carries the dual-core 1.86 GHz Intel Atom processor D2550. The AMB-D255T1 features powerful graphics performance via VGA and HDMI, DDR3 SO-DIMM support, an mSATA socket with USB signals, a SIM slot, and a DC jack for easy power input. The AMB-D255T1 also provides complete I/O, including 4 x COM ports, 6 x USB 2.0 ports, 1 x GbE RJ-45 port, and 1 x SATA port with power connector.
The AMB-D255T1 can support dual displays via VGA, HDMI, or LVDS, and has one Mini PCIe expansion slot with SIM card socket for customer expansion.
In-vehicle computer, single board computer, industrial PC
Wi-Fi (IEEE 802.11 a/b/g/n) is a commonly adopted technology for in-vehicle computer products. The two Wi-Fi operating modes are AP (Access Point) Mode and Station Mode. In general, a Wi-Fi module supports only Station Mode, which allows the user to connect to a local Access Point in range; however, Acrosser has studied and reviewed numerous field applications and took these results into consideration when designing its in-vehicle computer products, such as the AR-V6100FL and AR-V6002FL. Acrosser Wi-Fi modules support both AP Mode and Station Mode, allowing the user to switch modes during operation.
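The runtime mode switch described above can be pictured as a small two-state machine. The class and method names below are invented for illustration and do not reflect Acrosser's actual driver API; they only show the AP/Station switching behavior the text describes.

```python
class WifiModule:
    """Toy model of a Wi-Fi module that supports both operating modes."""
    MODES = ("station", "ap")

    def __init__(self):
        self.mode = "station"  # modules commonly power up in Station Mode

    def switch_mode(self, new_mode):
        """Switch between Access Point Mode and Station Mode at runtime."""
        if new_mode not in self.MODES:
            raise ValueError(f"unknown mode: {new_mode}")
        if new_mode != self.mode:
            # a real driver would drop associations and reconfigure the
            # radio here before coming back up in the new mode
            self.mode = new_mode
        return self.mode

wifi = WifiModule()
print(wifi.switch_mode("ap"))       # vehicle now serves as a hotspot
print(wifi.switch_mode("station"))  # back to joining a nearby AP
```

Single-mode modules, by contrast, would have no valid transition out of Station Mode, which is exactly the limitation the dual-mode design removes.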
In vehicle computer, single board computer, Industrial PC
The AR-V6005 and AR-V6100 support an optional GPS/GPRS/Wi-Fi module inside one compact system, to fulfill the high demand from telematics applications. In addition, the Acrosser in-vehicle PC has an excellent mechanical design for high environmental endurance: it is certified to operate under 3 G vibration (per IEC 60068-2-64) and 50 G/11 ms shock (per IEC 60068-2-27), and is fully compliant with in-vehicle application requirements such as E-Mark certification (E-13).