Everything You Need to Know About Semiconductors

BY MEGAN RAY NICHOLS

Technology is an integral part of our daily lives. We talk to friends and family, stay connected with colleagues, and socialize and work from home on our computers and smartphones. Different types of technology show up in nearly every sector, from food to education to manufacturing and everything in between. And the innovations we know and love today rely on one thing to keep them moving forward: semiconductors.

What are semiconductors? How do they work, and where might you find them as part of your everyday life?

What Are Semiconductors?

When you plug your charging cable into the wall, how does it work? Electricity travels from the socket through a copper wire until it reaches your phone or another battery-powered device. This setup works because the copper inside your charging cable is what's known as a conductor: it transfers electricity well under nearly any circumstance. Copper and other conductors like steel and aluminum have very little resistance and allow electricity to flow unimpeded from source to destination.

Semiconductors, on the other hand, act as insulators most of the time. They have higher resistance and conduct electricity only under certain conditions, for instance when doped with impurities, heated, or exposed to an electric field. On the other end of the spectrum are the insulators: materials that don't conduct electricity at all. You'll often find these surrounding conductors and semiconductors to prevent the power from arcing out and potentially causing harm. Rubber and glass are both standard insulators.
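To put rough numbers on the three categories, here's a small sketch using order-of-magnitude resistivity figures from physics textbooks (the exact values vary with temperature and purity, so treat them as illustrative):

```python
# Approximate room-temperature resistivities in ohm-metres (textbook
# order-of-magnitude figures; real values vary with temperature and purity).
RESISTIVITY = {
    "copper (conductor)": 1.7e-8,
    "silicon (semiconductor)": 2.3e3,   # pure, undoped silicon
    "rubber (insulator)": 1e13,
}

def resistance(material, length_m, area_m2):
    # Resistance of a uniform bar: R = rho * L / A
    return RESISTIVITY[material] * length_m / area_m2

# A one-metre bar with a 1 mm^2 cross-section of each material:
for name in RESISTIVITY:
    print(f"{name}: {resistance(name, 1.0, 1e-6):.2g} ohms")
```

The spread is enormous: the same bar goes from hundredths of an ohm (copper) to billions of ohms (pure silicon) to something effectively infinite (rubber), which is exactly why tuning a semiconductor's conductivity is so useful.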

If they conduct electricity only some of the time and act as insulators the rest of it, why are semiconductors essential to the creation of technology? How do they work?

Continue reading Everything You Need to Know About Semiconductors

How Is Machine Learning Changing Education?

By Megan R. Nichols

Artificial intelligence (AI) is no longer science fiction, and it may be more abundant than you realize. AI already powers things from Netflix recommendations to smart speakers to autopilot in airplanes, and it’s only going to grow from here.

More companies are beginning to take advantage of this technology, as AI adoption rose 270% in just four years. Soon we’ll see this technology used across all industries.

One promising area for machine learning is human education. Teaching is a vital yet complex profession. Discerning each student's educational needs can be a real challenge, and schools may not always have the ability to meet it. AI can help.

Machine learning has a knack for solving complicated problems that have proved too difficult for humans. This problem-solving ability can find ways to improve education for students of all ages and skill levels.

Personalized Learning

One of the most substantial challenges facing educators is that different students require different teaching strategies. Some may grasp certain subjects at a different rate than others or respond better to another learning style. Teachers may not be able to determine the specific needs of each student — much less have the time to address them all.

Continue reading How Is Machine Learning Changing Education?

Innovations Expected To Make Driving A Worthy Experience

BY JACKIE EDWARDS

The demand for automotive engineers was set to rise by 18,700 in 2018, according to a survey by the Recruiter. This is not a shocking statistic, considering that the automotive industry is one of the largest markets in the world. The constant application of science to daily life, in the form of innovations like machine learning, is once again showing up in the manufacturing of cars. Cars are getting sleeker, more intelligent and more accommodating with each update, and new innovations continue to improve the driving experience now and for the future.

Autonomous Vehicles

Also called self-driving cars, autonomous vehicles are finally here after decades of research and test drives. A self-driving car means you can have a hands-off experience on the highway. With the latest car diagnostic tools available, it is only fair to have the best car to test them on. The Audi A8, with Level 3 automated driving, is just the first of many. The science behind this amazing feature is a combination of sonar, GPS, radar, lidar, odometry and inertial measurement units. Together, these sensors let the car perceive its environment, including the road structure, other road users and approaching cars, and adjust its speed accordingly. It may be a while before these cars are allowed on roads without a driver, though.

Biometric Vehicle Access

Gone are the days when only a key could get you into your car, and so are the days when you could break into a car using a hanger. The existing radio-frequency key fob technology is awesome, but biometric vehicle access is even cooler. At its launch at the 2018 North American Auto Show, the Nissan XMotion showcased a fingerprint scanner that opens the door: your car basically starts when you touch it. Biometric technology is already used in connected cars, and it will appear in more mainstream and futuristic cars in the coming years.

Continue reading Innovations Expected To Make Driving A Worthy Experience

How Augmented Reality Invaded the Banking and Food Industry

“55% of consumers would like to be able to point their phone at any object and receive information about it.”
                              ~ Research from Mindshare

There was a time when television was the best tool for letting viewers feel the physical experience of something without leaving their bed. Today, augmented reality (AR) has set the stage for marketers to interact with their target customers in a genuinely different and effective manner, engaging users with a more compelling experience that can boost a product offering.

Nearly every business today is working on some form of augmented reality for its product or brand. The global market for AR products is expected to jump 80% by 2024, reaching $165 billion. Augmented reality engages customers and enhances the customer experience.

In this post, we will look at a few examples of businesses that are successfully engaging their customers with augmented reality throughout the customer journey.

Banking Is Becoming More Interactive

No matter how exceptional or efficient a bank's systems are, what a user ultimately expects is the end result: an outstanding experience.

The banking industry has embraced digital technology with the aim of boosting customer service. Internet banking, mobile banking and mobile payments are a few of the technologies banks already provide to their customers. But how remarkable would it be if banks could provide detailed information about account balances, payment due dates, credit and debit card balances, and so on, just by scanning a customer's card?

Yes, all of the above-mentioned features are possible today with the help of one extremely useful technology in the banking sector: augmented reality. Banks are steadily shifting towards AR technology to enhance the experience for their customers.

Continue reading How Augmented Reality Invaded the Banking and Food Industry

How Computer Science is Revolutionizing the Housing Market

by Jackie Edwards

House sales in the US are surpassing all expectations, reaching their highest level since last November and jumping 6.7% between April and May of this year. This is, of course, down to a number of factors, but one that is often overlooked is the role of technology. With rapid advances in computer science, the way we find and purchase a new house is changing. Some of the changes involve the boring but important financial stuff, while others are a bit more exciting. Taken together, though, technological progress could explain why home ownership is increasing across the country.

Easier Transactions

Back in the days when cash was all we had, buying a house was a difficult process. Even sending checks in the mail was a burden that slowed everything down. This made the purchase of a property extremely cumbersome. With the digitization of money, we are overcoming this obstacle. Bank transfers can now be made in an instant and monthly payments can be sent automatically.

The US is behind in the adoption of online payments, however. While just 3% of Americans have used contactless payment in the past month, this number rises to 54% among British consumers. Digital transactions are so popular across the pond that HM Land Registry (the government department which registers property ownership) aims to digitize and automate 95% of transactions by 2022. It’s all about efficiency, so homes can be bought and registered quickly and cheaply.

Cutting Paperwork and Middlemen

With a similar goal in mind, tech is helping to change property purchases by cutting out unnecessary stages in the buying process. The internet allows direct communication between landlords and tenants, as well as house sellers and buyers. The role of the estate agent is being replaced by a computer algorithm, meaning fewer people to deal with and, in turn, fewer people to pay. Buying a house online requires no paperwork or even a need to schedule a meeting. The terms and regulations of the transaction can be emailed as a PDF attachment and returned signed within minutes.

Virtual Viewings

So that's the boring (but important) stuff out of the way, but how is technology making home buying fun again? One way is through virtual reality. One of the problems with purchasing a property is that it must be viewed multiple times by potential buyers, which means coordinating an appropriate time between house hunter, real estate agent and the current occupiers of the home. Doing this several times, when people are busy with work and social commitments, can cause difficulties. Airbnb is leading the way in solving this problem, aiming to create virtual viewings of its homes with the help of headsets and so reduce the need to travel. This is especially useful for properties in a faraway town or overseas.

Computers are affecting almost every area of society, but the housing market could be among the most impacted. It is now easier than ever to buy a home, with costs being cut to make it more affordable for those on low incomes. The ease with which payments can be made, combined with the chance of virtual viewings, increases convenience for everybody involved.

How is Science Sculpting the Modern Athlete?

by Jackie Edwards

Sport is big business these days, with the North American market worth $60.5 billion and predicted to rise to $73.5 billion in 2019. Sport is not only a moneymaker for event promoters and the media; it is also increasingly seen as a top career choice for those with the talent, drive and commitment required to succeed. New developments in sport have shown that success is not all about the individual athlete. In popular sports like tennis, football and golf, science and technology are playing an important role in helping competitors perform at their full potential. In this post, we look at just a few ways that science is changing the way we play and compete.

Swing Training Technology for Golf

You would need to be a master physicist to work out the exact angle at which to position your club when playing golf, but science and technology are making it a whole lot easier with swing-training technology, which brings real-time body-positioning analysis to everyday golfers through a handy app. The app 'tells' golfers exactly how to position their body and gives them pointers on how to do better next time. Of course, the app won't fix deeper problems such as weak muscles in the shoulders and back. Top-level athletes will still need to carry out golf-specific training programs, including strength training for key muscle groups. In essence, performing the right swing depends on fundamentals like back strength, so you may need to address those first to perfect your game.

Head and Neck Support for Motor Sports

Dale Earnhardt's death on the track at the Daytona 500 revealed the extent to which the head and neck are vulnerable in motor sports. The HANS (head and neck support) device was created to stop the head from whipping forwards and backwards in the event of an accident, and to lend more support to the neck. The device is U-shaped and sits behind the neck, with two 'arms' that extend over the pectorals. Over 140,000 devices have already been sold worldwide.

Wearable Computers and Hawk-Eye Camera Systems

Wearable computers are allowing both players and managers to assess a player's fatigue, hydration and more. This type of information is vital for preventing heart attacks and other major health events on the field. Smart fabrics will enable athletes to glean even more information, including heart-function data and the movement of the body's center of mass. Scientists have suggested that the future could take us beyond wearables. The Hawk-Eye camera system is currently used to obtain information on running biomechanics and other metrics during elite players' games. The NBA, meanwhile, relies on Second Spectrum's computer vision technology to obtain information about player positioning and other 3D data, such as ball and referee positioning.

We have presented just a few of the ways in which science and technology are enabling athletes to perform at their best while staying safe. Wearable devices and fabrics, aerial camera systems and new safety gear are making sport a more precise, and more appealing, pursuit. Information is power, and nowhere is this truer than on the field or track.

Health Benefits of Gaming

by Marcus Clarke

Apparently it was the Buddha who first said 'health is the greatest gift'. He was certainly right. However, he probably never imagined that one of the ways we can receive this 'greatest gift' is through video games. A growing body of research is showing that gaming can be extremely good for your health, in a bewildering number of ways.

For example, gaming can increase the strength and size of the brain areas associated with a number of key skills, such as motor skills and spatial awareness. So gaming can actually increase the size of parts of your brain! Gaming can also reduce sensations of pain. Studies on soldiers injured in battle, in which half played a virtual reality game and half acted as a control group, found that those who played video games were less likely to need pain medication. Amazing! And there's more: gaming can slow cognitive decline in the elderly and in those suffering from degenerative neurological disorders. Gaming in this sense has many public health applications.

Have you recently suffered some sort of trauma? Fear not: gaming may be the answer, as research has shown it can minimize the effects of trauma. One study showed, for example, that those who had recently undergone surgery and played video games tended to recover more quickly. Likewise, those who had been through traumatic events tended to have fewer flashbacks and after-effects if they played video games.

So, gaming is not just about fun, and vegging out on a weekend after a hard week at work. It can actually have really positive effects, particularly on your cognitive health. Likewise, modern games are increasingly good for your cardio health, as games become more active, and your body becomes the controller.

To find out more about how gaming can be good for you, see the infographic below from Computer Planet.

Continue reading Health Benefits of Gaming

Simple Steps for IIoT Cloud Security

BY MEGAN RAY NICHOLS

The Industrial Internet of Things (IIoT) makes it easier than ever to track and analyze data, integrate different hardware platforms and achieve next-gen connectivity. While it serves as a one-stop shop for many manufacturers, some find it difficult to maintain proper security. With threats coming from all angles, it's impossible to safeguard your system against every possible cyber-attack. You can, however, take steps to ensure your initial preparedness and improve your reaction time in the event of an intrusion.

Monitoring Evolving Industry Standards

Despite its usefulness, the IIoT is anything but standardized. Much of the technology powering the platform is still in its infancy, so the ultimate potential of the IIoT is subject to future breakthroughs and innovations in general IT. This makes it difficult to adopt standards for network security, cloud access and IIoT integration – but that hasn’t stopped some organizations from trying.

Make sure to research the security systems of any cloud services or IIoT devices you incorporate within your company, so you receive the quality of protection you deserve. Companies tend to use unique strategies to ensure security across their networks, so it's important to find one that aligns with your needs, requirements and expectations. Although there isn't a strict protocol for processing and securing such vast amounts of data, the International Society of Automation (ISA) has established the ISA99 standards for industrial automation and control systems security.

But ISA99 is also a work in progress. Through the larger IEC 62443 series of standards that builds on it, the International Electrotechnical Commission (IEC) hopes to usher in a new age of security and efficiency throughout the entire industry.

Establishing Your Own Best Practices

It's important for manufacturers to develop their own best practices with regard to IIoT technology. Not only does this help you maintain acceptable standards of data collection, storage and security for the time being, but it also lets you retain the option of transitioning to new industry regulations as they develop.

The process of establishing your own best practices for IIoT integration depends on your unique requirements. Will your connected devices communicate via Bluetooth or a cellular connection? Do you have legacy hardware, such as tape backup, which currently holds your company’s critical data? Answering these questions is the first step in creating standards for IIoT integration.

Next, consider how your employees will access the cloud and your IIoT network. The rising popularity of smartphones and mobile devices has prompted some to embrace the bring-your-own-device (BYOD) model of connectivity. Others would rather limit access to the desktop computers and workstations around the factory.

Identifying and outlining your exact needs is critical when balancing network accessibility with cloud security, and it makes the process of safeguarding your system as straightforward and simple as can be.

Implementing Security to Protect Your Data

The final step in achieving IIoT cloud security requires you to introduce the systems that will secure your network. Manufacturers use various tools to protect their data, including encryption, file signatures and firewalls.
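As one concrete illustration of the "file signatures" idea, here is a minimal sketch in Python (the chunk size and SHA-256 are my own reasonable defaults, not a requirement of any particular IIoT platform): record a digest of each critical file, then re-compute it later to detect tampering.

```python
import hashlib

def file_signature(path, algorithm="sha256"):
    # Hash the file in chunks so large data files don't have to fit in memory.
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(path, expected_digest):
    # True only if the file's contents still match the recorded signature.
    return file_signature(path) == expected_digest
```

Store the recorded digests somewhere your production systems cannot write to; a signature check is only as trustworthy as the copy of the expected digest.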

Keep in mind that you’re protecting your digital assets from external and internal threats. By placing all the focus on counteracting and preventing cyber-attacks, it’s easy to lose track of employees who might have physical access to your IIoT cloud. This is where user access privileges, consistent system administration and strong password requirements are helpful.

Creating a Security Model That is Versatile, Flexible and Scalable

It's also important to develop a security model that can adapt to future trends and innovations. Hardware regarded as groundbreaking today will be replaced by newer, upgraded versions within a few years. Likewise, hackers and cybercriminals are always devising new ways to access vulnerable systems and exploit weaknesses before they're patched. It's a never-ending tug of war that requires a lot of diligence on the part of your IT team, because the success of your company might depend on it.

The Cocktail Party Effect

Introduction

The term cocktail party effect was coined by the British cognitive scientist Colin Cherry in the 1950s. He was interested in understanding how people listen, and he conducted a series of experiments. In the first, he played two different overlapping messages, recorded in the same person's voice, through headphones. Participants were asked to listen carefully and try to write one of the messages down on paper. With enough concentration, they usually succeeded.

So, if someone asks you to describe the cocktail party effect, the formal definition is as follows:

Cocktail Party Effect Definition:

The cocktail party effect is the phenomenon of being able to focus one’s auditory attention on a particular stimulus while filtering out a range of other stimuli, much the same way that a partygoer can focus on a single conversation in a noisy room. Continue reading The Cocktail Party Effect

Everything You Should Know About Machine Learning

Programming Computers: Then and Now

I find it fascinating that today you can define certain rules, provide enough historical data to a computer, reward it for getting closer to the goal and punish it for doing badly, and thereby train it to do a specific task. Based on these rules and data, the machine can learn to do tasks so well that we humans have no way of knowing what steps it is explicitly following to get the work done. It's like the brain: you can't slice it open and understand the inner workings.

The days when we had to define each step for the computer to take are now numbered. The role we played back then, as a god to the computers, has been reduced to something like that of a dog trainer. The tables are turning from commanding machines to parenting them: rather than writing code, we are becoming trainers, and computers are learning. It has been called machine learning for quite a while now (the term was coined in 1959 by Arthur Samuel); other names include artificial intelligence, deep learning and cognitive computing. Now, though, the field has really picked up, and based on the amazing things it can already help computers do, it is clearly going to shape what the IT industry transforms into.
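The reward-and-punish training loop described above can be seen in miniature in the perceptron, one of the oldest machine-learning algorithms (this toy example is mine, not something from any particular system): each wrong answer nudges the weights toward the target, and after a few passes the program has "learned" the logical OR function without anyone spelling the rule out.

```python
# Training examples: inputs and the target output of logical OR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, start knowing nothing
b = 0.0          # bias
lr = 0.1         # learning rate: how big each "reward"/"punishment" nudge is

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                      # a few passes over the data
    for x, target in data:
        error = target - predict(x)      # +1: answered too low, -1: too high
        w[0] += lr * error * x[0]        # nudge the weights toward the target
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])     # prints [0, 1, 1, 1]: OR learned
```

The final weights encode the rule, but nothing in them reads like a human-written instruction, which is exactly the point made above.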

Continue reading Everything You Should Know About Machine Learning

Making 3D Plots in OriginLab OriginPro

When I first wanted to use OriginPro to make 3D plots of my Raman and photoluminescence data, I struggled for hours to solve little problems. It wasn't that I had no one to consult; I wanted to figure it out myself, because that same philosophy has previously helped me immensely in learning things with authority and developing my own characteristic style, by putting in an extra amount of time and hard work.

Check with your university; there's a big chance they offer students a free copy of OriginPro. It is an amazing piece of software, easy to learn, and it can make plots beautiful enough to be published in Science or Nature.

However, when I was toiling through it all, I also wished that someone who had gone through the same thing had documented it somewhere. Not surprisingly, no one had. So I wanted to. Here it goes.

Continue reading Making 3D Plots in OriginLab OriginPro

How do Court Reporters Type Incredibly Fast?

By Anupum Pant

I've always heard about shorthand, but I never cared to look up how it actually works. I had assumed it must be very similar to ordinary typing, just a way of making your typing faster. Turns out, I was wrong. It's very different.

Whatever happens in court goes on the record. There's no computer doing speech-to-text there; it's humans. These people are trained to type about 200 words per minute and can manage an accuracy of 98.5%. That's pretty incredible. But how they do it is a different story.

They use a different keyboard, a stenotype, which has just 22 keys. There's no full QWERTY layout, and it looks something like this.

Instead of typing out whole words, they listen to how the words sound. The context doesn't even matter to them; they just record the sounds. With this technique, a long word can be completed in just a few strokes.
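To see why a few strokes can produce a long word, here is a toy sketch. The "strokes" and the dictionary below are entirely made up for illustration; real steno theories use a fixed key order and dictionaries with hundreds of thousands of entries. But the principle is the same: each chord (all keys pressed at once) maps to a sound or word fragment.

```python
# A made-up chord-to-sound dictionary (illustrative only; real steno
# dictionaries are far larger and use standardized stroke spellings).
CHORDS = {
    "STEN": "steno",
    "TKPWRAF": "graph",
    "ER": "er",
}

def translate(strokes):
    # Each stroke is one chord; the output is the concatenated sounds.
    return "".join(CHORDS[s] for s in strokes)

print(translate(["STEN", "TKPWRAF", "ER"]))  # "stenographer" in 3 strokes
```

Three chords for a twelve-letter word is how 200 words per minute becomes reachable.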

via [todolivas]

Mastering The Best Useless Skill – Reading Text in Binary

By Anupum Pant

The next time you see a series of 0s and 1s, you will no longer need to take it to a computer and feed it in to read it. Of course you might never have to read a text in binary, and that is the reason this might be the most useless skill you could master right away. I’m doing it anyway.

Tom Scott recently posted a video on YouTube in which he teaches you how to read text written in binary. It's fairly easy. The only thing you need to practice, if you don't already know it, is the number associated with each letter of the alphabet (1 for A, 2 for B, and so on).
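The decoding itself is mechanical once you know the trick. Assuming standard 8-bit ASCII (which is what such puzzles almost always use), each group of eight bits is one character, and for a letter the last five bits give its position in the alphabet:

```python
def binary_to_text(bits):
    # Decode space-separated 8-bit ASCII codes, e.g. "01001000 01101001".
    return "".join(chr(int(group, 2)) for group in bits.split())

print(binary_to_text("01001000 01101001"))  # prints "Hi"

# The shortcut from the video: drop the leading bits of a letter's byte
# and read the last five as a number (A=1, B=2, ...).
print(int("01000001"[3:], 2))  # 1, so this byte is the letter A
```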

via [ScienceDump]

Langton's Ant

By Anupum Pant

Think of an ant the size of a single cell, sitting on a huge grid of white cells. The thing to note about this ant is that it follows a certain set of simple rules. The main rule is that when the ant exits a cell, it inverts the colour of the cell it just left. Besides that:

  1. If the ant enters a white square, it turns left.
  2. If it enters a black square, it turns right.

Here's what happens if the ant starts in the middle and, as its first step, moves to the cell on its right (the first step can be towards any side).

First step, it goes to the right.

Enters a white cell and rule 1 kicks in. The exited cell is inverted in colour and it turns left.

Enters a white cell and rule 1 kicks in. The exited cell is inverted in colour and it turns left. (Again)

Enters a white cell and rule 1 kicks in. The exited cell is inverted in colour and it turns left. (Again)

Enters a black cell and rule 2 kicks in. The exited cell is inverted in colour and it turns right.

Rule 1 again and so on…

Now, as this continues, a seemingly random figure starts taking shape. The black cells are in total chaos; there seems to be no specific order to how they appear on the canvas. (Of course, the pattern is always the same chaos, given that the ant starts on a blank array of cells.)

And yet, after about 10,000 steps, the ant starts creating a very orderly, highway-like figure on the canvas. It enters an endless loop of 104 steps which repeats forever and builds a long highway-like structure.

Now suppose that you instead start with a configuration of black spots on the canvas, rather than a blank white one: take an array of cells with randomly arranged black spots, for instance. Given enough time, the ant still always ends up making the looped highway. However, it might take significantly fewer, or significantly more, steps than the ~10,000 it takes to reach the loop on a blank array of cells.
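The rules are compact enough to simulate in a few lines. Here is a minimal sketch (storing only the black cells in a set, with turn directions following the white-left/black-right convention above); after the chaotic phase, the ant's displacement over any 104 consecutive steps becomes one fixed vector, which is the highway:

```python
def langtons_ant(steps):
    # Run the ant on an all-white, unbounded grid; return its path.
    black = set()        # coordinates of the currently black cells
    x = y = 0            # the ant starts in the middle
    dx, dy = 0, -1       # facing "down", so the first left turn goes right
    path = []
    for _ in range(steps):
        if (x, y) in black:
            dx, dy = dy, -dx       # rule 2: black cell, turn right
        else:
            dx, dy = -dy, dx       # rule 1: white cell, turn left
        black ^= {(x, y)}          # main rule: invert the cell on exit
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = langtons_ant(13000)
# Well inside the highway phase, displacement repeats every 104 steps:
d1 = (path[12104][0] - path[12000][0], path[12104][1] - path[12000][1])
d2 = (path[12208][0] - path[12104][0], path[12208][1] - path[12104][1])
print(d1 == d2)  # prints True: the 104-step loop has taken over
```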

No exception has ever been found. The computer scientist Chris Langton discovered this behaviour in 1986.

Scientifically, Do Retina Displays Make Sense?

By Anupum Pant

Our eyes don't work like a camera, with pixels and frame rates. The eye moves rapidly in small amounts and continuously updates the image to "paint" in the detail. And since we have two eyes, the brain combines both signals to increase the resolution further. As a result, the brain can build a much higher-resolution image than the eye's raw abilities would suggest. The very fact that we haven't been able to come up with artificial devices that work the way a human eye does confirms that we don't completely understand this complex device yet.

But what we do know about the average human eye is that its ability to distinguish between two points is measured at around 20 arcseconds. That means two points need to subtend an angle of at least about 0.005 degrees to be distinguished by the human eye. Points lying any closer than that appear to the eye as a single point.

One thing to note is that if an object subtends 0.005 degrees when it lies 1 foot away, it will subtend a smaller angle as it moves further away. This is why you have to bring tiny text closer in order to read it: bringing it closer increases the angle it subtends, and only then can the eye resolve the individual letters. In other words, anything is sharp enough if it is far enough away.

Apple Science

The Retina display, Apple's flagship display, is said to be so sharp that the human eye is unable to distinguish individual pixels at a typical viewing distance. As Steve Jobs said:

It turns out there's a magic number right around 300 pixels per inch, that when you hold something around 10 to 12 inches away from your eyes, is the limit of the human retina to differentiate the pixels. Given a large enough viewing distance, all displays eventually become retina.

Basically, Apple has done science at home and has come out with a nice number, 300 PPI. Practically, you don’t need anything higher than that. Technically, you do.

Isn’t “more” better?

No one is really sure. According to my calculations, an iPhone 5s's display (3.5 × 2 in) would subtend 13.3 × 7.6 degrees from a 15-inch distance. With the kind of resolving power our eyes sport, you'd need a screen that can display about 4 megapixels at that size. In other words, you'd need a screen that packs around 710 PPI; practically, that sounds a bit too extreme (or maybe my calculations are wrong; please point it out in the comments). I'd go with Steve Jobs's calculation.
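The arithmetic can be reproduced with a short script. With the 20-arcsecond (0.005°) figure from above, the script gives roughly 760 PPI at 15 inches (slightly above my rough 710 estimate, depending on how the screen dimensions are rounded); with the more commonly cited 1-arcminute acuity, it drops to roughly 230 PPI, much closer to Apple's number:

```python
import math

def required_ppi(viewing_distance_in, acuity_deg=0.005):
    # Smallest spacing the eye can resolve at this distance, in inches:
    spacing = viewing_distance_in * math.tan(math.radians(acuity_deg))
    # One pixel per resolvable spacing:
    return 1.0 / spacing

print(round(required_ppi(15)))          # about 760 PPI at 20 arcseconds
print(round(required_ppi(15, 1 / 60)))  # about 230 PPI at 1 arcminute
```

So the "technically" versus "practically" gap comes down entirely to which acuity figure you plug in.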

My shitty screen is a retina display

So, technically, any device can be said to sport the most touted screen in the industry today, a retina display, if it is kept at a sufficient distance. For instance, my laptop's monitor, with a resolution (~110 PPI) less than a quarter of what we see on today's devices, becomes a retina display when I use it from a distance of about 80 cm. That is also roughly the distance I normally use my laptop from, and even doctors consider 50-70 cm the optimum screen-to-eye distance for avoiding eye strain.

On my shitty screen, the pixels are 0.23 mm apart, center to center. And at 80 cm, my eye is practically unable to see the difference between a retina display and a shitty display. So I ask: do you really need ever-higher-PPI devices? But that is just my opinion.

My Shitty phone is a retina display

As phones are generally used from a much closer distance, they require a higher PPI for the screen to look crisp. My phone, a Lumia 520, has a 233 PPI screen. It becomes a retina display at any distance beyond about 15 inches. So I'm required to hold my phone about 4 inches further away than an iPhone to turn it into a display that is as good as an iPhone's. Do I ever bring my phone closer than that? No. Do I need a higher PPI? No.

Conclusion

Recent phones from Samsung, Nokia and HTC pack in 316, 332 and 440 PPI or more. Companies are spending billions to decrease the distance between their pixels; Sony, for instance, has recently come out with a 440 PPI display. And now we have 4K TVs. Practically, I'd say, put an end to this manufacturer pissing contest and use the money for something more worthwhile. Technically, according to the calculations, there is still some way to go before displays cram in enough pixels to fully please the human eye.
