Is Your Machine Vision System Color Blind?

Color machine vision has its challenges. A color system can produce three times the data (or deliver less than one-third the resolution) of a monochrome camera solution. Color can also introduce more potential sources of imaging error, more complexity, and more cost, and it can require careful engineering that reduces the system’s flexibility to handle lines producing products of varying shapes, colors, and sizes. In fact, if designers can find a way to use filters and lighting to measure a colored area with monochrome cameras, they usually do.

Today, printed circuit boards require more color vision solutions because the color of a component helps identify each part. Plugs and connectors are color coded, while the board itself is tracked with a black-and-white barcode. “These applications used to be done with a high-resolution monochrome camera, but now you need to be able to sense color to make sure the right component and connector are in the right place,” Kinney explains. “The barcode will usually be located at the edge of the frame. If you use a single-chip color camera, you have to be concerned about color shading and halos at the edge of the image, and it’s made worse if you use cheap optics.”
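A minimal sketch of this split approach, assuming OpenCV and a hypothetical board image "board.png"; the HSV bounds and region coordinates below are illustrative values, not calibrated ones. The color channel verifies the connector, while the barcode region is taken from a grayscale version of the frame to sidestep edge shading:

import cv2
import numpy as np

img = cv2.imread("board.png")              # BGR frame from a color camera
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Check that a connector region contains the expected color (here: red-ish).
lower, upper = np.array([0, 120, 80]), np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
region = mask[100:200, 300:400]            # hypothetical connector location
color_ok = cv2.countNonZero(region) > 0.5 * region.size

# The barcode is decoded from a monochrome view of the same frame, which
# avoids color shading and halos at the edge of the image.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
barcode_roi = gray[0:80, 0:300]            # hypothetical edge-of-frame location

print("connector color OK:", color_ok)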

refer to: http://www.visiononline.org/vision-resources-details.cfm/vision-resources/Is-Your-Machine-Vision-System-Color-Blind/content_id/4333

Business Transformation Through Remote Collaboration

Operational Benefits: Significant advantages included:

A single prioritized view of well operations
Real-time analysis capability for production data
Real-time feedback on well performance
Improved production and forecasting accuracy
Quick implementation thanks to out-of-the-box availability
Easily supportable and maintainable monitoring solution
Conformance and integration with corporate standards
But What Really Matters: This solution has facilitated better decision making, helping experts to take the right action at the right time to solve problems, take advantage of opportunities and improve well performance … but so what?

In this particular case, the bigger-picture business goal was reduced time to first oil, enabled by an out-of-the-box yet customizable solution. Even bigger than that, though, is the producer’s estimate of a 4-to-6 percent production increase from real-time data networking and analysis.

The Situation: A leading global producer of crude oil and natural gas looked for a way to stay ahead of dynamic market demands and overcome challenges associated with offshore oil and gas automation. As part of an innovative technology project, and with the help of Honeywell, the company built a solution to help coordinate control of multiple offshore platforms in the North Sea and improve operations and efficiency.

refer to: http://www.automation.com/business-transformation-through-remote-collaboration-optimization-and-operations


DIY pushes open hardware from kindergarten to Kickstarter

A resurgence of the Do It Yourself (DIY) community has driven a range of open hardware platforms, giving aspiring technologists cheap and easy access to embedded development. Beyond hobbyist toys and educational devices, however, “hacker” boards are gaining performance and I/O flexibility, and have become viable options for professional product development.

Kickstarter projects like Ninja Blocks are shipping Internet of Things (IoT) devices based on the BeagleBone (see this article’s lead-in photo), and startup GEEKROO is developing a Mini-ITX carrier board that will turn the Raspberry Pi into the equivalent of a PC. Beyond the low barrier to market entry these low-cost development platforms present, maker boards are being designed into commercial products because their wide I/O expansion capabilities suit virtually any application, from robotics and industrial control to automotive and home automation systems. As organizations keep enhancing these board architectures and more hardware vendors enter the DIY market, the viability of maker platforms for professional product development will continue to increase.
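The appeal of that I/O access is easy to demonstrate. A minimal sketch, assuming a Raspberry Pi with the RPi.GPIO library and a relay or LED wired to BCM pin 18 (the pin choice is hypothetical):

import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)            # address pins by Broadcom SoC numbering
GPIO.setup(18, GPIO.OUT)

try:
    for _ in range(10):
        GPIO.output(18, GPIO.HIGH)   # e.g., energize a relay
        time.sleep(0.5)
        GPIO.output(18, GPIO.LOW)
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the pins on exit

The same few lines of setup scale from a classroom blinker to driving industrial relays, which is exactly the range the maker boards span.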

refer to: http://embedded-computing.com/articles/diy-pushes-open-hardware-kindergarten-kickstarter/

Leveraging IT technology for industrial controls

With that said, the controls world must keep pace with an IT world that has a definite consumer bias, with product development and release cycles of six months or less. In an industry where the average life expectancy of an automotive production line is eight years, it is impossible to expect the networking in an industrial setting to keep up with modern IT standards. Therefore, we turn our attention to the technologies that have endured in the industrial space, with the most open standards and the very best support. These are the protocols we want to adopt and keep, and this article highlights and explains some of them. This article does not focus on the technical implementation of each piece of technology; rather, it is assumed the reader will be using packaged solutions such as a function block for a PLC.
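The excerpt does not name specific protocols, but Modbus TCP is one example of an open industrial standard that has endured for decades. A minimal sketch, assuming the pymodbus package (3.x import path) and a hypothetical device address and register map; a PLC function block would hide the same read behind a single block call:

from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.10", port=502)  # hypothetical PLC/device
if client.connect():
    # Read two holding registers starting at address 0.
    result = client.read_holding_registers(address=0, count=2)
    if not result.isError():
        print("registers:", result.registers)
    client.close()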

refer to:
http://www.automation.com/leveraging-it-technology-for-industrial-controls-applications

Open source licensing for embedded products

Each type of open source license used in an embedded product design imposes a unique set of obligations on the development team incorporating that software into its products. Because of this, some companies maintain a list of open source licenses approved for use by their developers. Other companies go further, explicitly listing which specific version of each open source package has been approved for possible incorporation into the company’s products.

Ensuring that the development team is aware of, and in compliance with, the obligations associated with each of these open source licenses takes time and effort. Tools that can identify and track the underlying licenses, and help ensure license obligations are met, can prove quite valuable when trying to hit aggressive product development milestones.
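A minimal sketch of the kind of tracking such tools automate: walk a source tree and collect SPDX-License-Identifier tags. Real scanners (e.g., FOSSology or scancode-toolkit) do far deeper matching; the "./firmware" path below is hypothetical:

import os
import re

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def collect_licenses(root):
    """Map each license identifier found to the files that declare it."""
    found = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    head = f.read(4096)      # SPDX tags live near the top
            except OSError:
                continue
            m = SPDX_RE.search(head)
            if m:
                found.setdefault(m.group(1), []).append(path)
    return found

for license_id, files in collect_licenses("./firmware").items():
    print(f"{license_id}: {len(files)} file(s)")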

refer to: http://embedded-computing.com/articles/the-not-code-quality/

Steering the embedded market

MOST150 provides a higher bandwidth of 150 Mbps, an isochronous transport mechanism to support extensive video applications, and an embedded Ethernet channel for efficient transport of IP-based packet data.

It succeeds in providing significant speed enhancements and breakthroughs while keeping costs down. The new Intelligent Network Interface Controller (INIC) architecture complies with the MOST Specification Rev. 3.0 and expands the audio/video capability of next-generation automotive infotainment devices such as head units, rear seat entertainment systems, amplifiers, TV tuners, and video displays.
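Illustrative arithmetic only, showing how a 150 Mbps frame budget might be shared between the Ethernet channel, isochronous video, and audio streams; the per-stream bitrates below are assumptions, not values specified by MOST150:

TOTAL_MBPS = 150

video_stream_mbps = 20    # e.g., one compressed HD video channel
audio_stream_mbps = 1.5   # e.g., one multichannel audio stream
ethernet_mbps = 40        # bandwidth set aside for the embedded IP channel

remaining = TOTAL_MBPS - ethernet_mbps - 2 * video_stream_mbps
audio_streams = int(remaining // audio_stream_mbps)
print(f"Budget left after IP + 2 video streams: {remaining} Mbps")
print(f"Room for roughly {audio_streams} audio streams")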

refer to: http://embedded-computing.com/news/most150-series-adoption/

Industrial PCs and embedded computers

Inevitably, the types of processors that will succeed in the future will be the SoCs that provide hardware-accelerated functions; it’s the only way applications will be able to meet their performance-power budgets. With homogeneous SMP devices, the performance gained by increasing core count is not scalable. For example, the more cores that share a common bus structure, the more each core must compete for memory bandwidth. This problem can be alleviated by designing chips that divide cores into clusters, where each cluster can operate autonomously if necessary.
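A toy model of that scalability point (the bandwidth figures are assumptions, not measured data): cores sharing one bus saturate on memory bandwidth, while splitting them into two autonomous clusters, each with its own bus, restores headroom:

def effective_speedup(cores, bw_total, bw_per_core):
    """Speedup capped by the shared memory bandwidth."""
    return min(cores, bw_total / bw_per_core)

SHARED_BUS_GBPS = 12.0   # hypothetical bandwidth of one shared bus
CORE_DEMAND_GBPS = 2.0   # hypothetical per-core bandwidth demand

for n in (2, 4, 8, 16):
    flat = effective_speedup(n, SHARED_BUS_GBPS, CORE_DEMAND_GBPS)
    # Two clusters, each with its own bus and half the cores:
    clustered = 2 * effective_speedup(n // 2, SHARED_BUS_GBPS, CORE_DEMAND_GBPS)
    print(f"{n:2d} cores: shared bus ~{flat:.1f}x, clustered ~{clustered:.1f}x")

At 16 cores the shared-bus speedup stalls at 6x in this model, while the clustered layout reaches 12x, which is the intuition behind dividing cores into clusters.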

What plans does the EEMBC have to expand its offerings in the future, and how can the industry get involved?

refer to: http://embedded-computing.com/articles/moving-qa-markus-levy-founder-president-eembc/


Embedded computing and the 4th generation Intel Core processors

The 4th generation Intel® Core™ processors serve the embedded computing space with a new microarchitecture, which Kontron will implement on a broad range of embedded computing platforms. Besides a 15 percent increase in CPU performance, graphics performance in particular has doubled compared to solutions based on the previous generation of processors. At the same time, the thermal footprint has remained practically the same or has even shrunk.

Based on the 22 nm Intel® 3D transistor technology already used in the predecessor generation, the processors, formerly codenamed ‘Haswell’, have seen a performance increase that will doubtlessly benefit applications.

With improved processing and graphics performance, greater energy efficiency, and broad scalability, the 4th generation Intel® Core™ processors with their new microarchitecture provide an attractive solution for a broad array of mid-range to high-end embedded applications in target markets such as medical, industrial automation, infotainment, and military.

refer to: http://embedded-computing.com/white-papers/white-intelr-coretm-processors/

Future blueprint for medical embedded SBCs


“If it is a mobile application with low to mid-range performance requirements, then Qseven is the right choice,” says Christian Eder, Marketing Manager at congatec AG, headquartered in Deggendorf, Germany (www.congatec.com). “Medical systems typically require special functionality such as ultrasonic control or high levels of isolation to protect patients in case of a malfunction. Standard SBCs typically do not offer that. The logical consequence is to create a custom carrier board that carries all the application-specific functionality and to complete it with a standard COM. Once the system is certified, it is quite easy to upgrade or scale to other CPUs while the certification remains valid or just needs to be updated. This provides a lot of freedom to choose the best-fitting CPU and graphics for a given application.”

This is just one example of why telehealth strategies are poised to revolutionize medicine. Telehealth not only provides quick access to specialists, but can also remotely monitor patients and reduce clinical expenses. Many of the systems needed to realize these benefits will operate on the edge, and will require technology with the portability and price point of commercial mobile platforms, as well as the flexibility to perform multiple functions securely and in real time. All of this must come in a package that can meet the rigors of certification and scale across long lifecycle deployments.

refer to: http://smallformfactors.com/articles/qseven-coms-healthcare-mobile/

Preventing DRAM failures in memory modules

An analysis of DRAM failure modes in embedded memory modules has determined that DRAM components with suboptimal reliability tend to fail during the first three months of use. As newer DRAMs move to smaller process geometries, there is a greater risk of chips containing weak bits (microscopic defects in individual cells). A weak bit is not enough to cause a DRAM failure outright, but it can surface as a single-bit error within weeks of initial field operation.

Using Test During Burn-In (TDBI) helps eliminate these potential early failures and improves the overall reliability of memory products. Although most DRAM chips undergo a static burn-in at the chip level, TDBI takes a more comprehensive approach: a 24-hour burn-in at the module level that dynamically runs and checks test patterns while the module performs under stress conditions. Studies conducted by various memory module manufacturers show that using TDBI chambers can reduce early failures by up to 90 percent.
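A minimal sketch of the style of pattern such dynamic tests run: write a walking-ones pattern, read it back, and flag single-bit errors. Here the “module” is simulated by a bytearray; real TDBI exercises physical modules in a heated chamber:

def walking_ones_test(memory):
    """Write a walking-ones pattern per byte and record any bit mismatches."""
    errors = []
    for addr in range(len(memory)):
        for bit in range(8):
            pattern = 1 << bit
            memory[addr] = pattern           # write
            readback = memory[addr]          # read
            if readback != pattern:          # compare
                errors.append((addr, bit))
    return errors

module = bytearray(1024)                     # stand-in for a memory module
failures = walking_ones_test(module)
print(f"{len(failures)} single-bit error(s) detected")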

refer to: http://embedded-computing.com/articles/ruggedization-memory-module-design/