Simple Steps for IIoT Cloud Security

BY MEGAN RAY NICHOLS

The Industrial Internet of Things (IIoT) makes it easier than ever to track and analyze data, integrate different hardware platforms and achieve next-gen connectivity. While it serves as a one-stop shop for many manufacturers, some find it difficult to maintain proper security. With threats coming from every angle, it's impossible to safeguard your system against every possible cyber-attack. You can, however, take some steps to ensure your initial preparedness and bolster your reaction time in the event of an intrusion.

Monitoring Evolving Industry Standards

Despite its usefulness, the IIoT is anything but standardized. Much of the technology powering the platform is still in its infancy, so the ultimate potential of the IIoT is subject to future breakthroughs and innovations in general IT. This makes it difficult to adopt standards for network security, cloud access and IIoT integration – but that hasn’t stopped some organizations from trying.

Research the security systems of any cloud services or IIoT devices you incorporate within your company to make sure you receive the quality protection you deserve. Companies tend to use unique strategies to ensure security across their networks, so it's important to find one that aligns with your needs, requirements and expectations. Although there isn't a strict protocol for processing and securing such vast amounts of data, the International Society of Automation's ISA99 committee has established standards for industrial automation and control systems security, developed jointly with the International Electrotechnical Commission (IEC).

But ISA99 is also a work in progress. Its work feeds into the larger IEC 62443 series of standards, which the IEC hopes will usher in a new age of security and efficiency throughout the entire industry.

Establishing Your Own Best Practices

It's important for manufacturers to develop their own best practices for IIoT technology. Not only does this help you maintain acceptable standards of data collection, storage and security for the time being, but it also keeps you ready to transition to new industry standards as they develop.

The process of establishing your own best practices for IIoT integration depends on your unique requirements. Will your connected devices communicate via Bluetooth or a cellular connection? Do you have legacy hardware, such as tape backup, which currently holds your company’s critical data? Answering these questions is the first step in creating standards for IIoT integration.

Next, consider how your employees will access the cloud and your IIoT network. The rising popularity of smartphones and mobile devices has prompted some to embrace the bring-your-own-device (BYOD) model of connectivity. Others would rather limit access to the desktop computers and workstations around the factory.

Identifying and outlining your exact needs is critical when balancing network accessibility with cloud security, and it makes the process of safeguarding your system as straightforward and simple as can be.

Implementing Security to Protect Your Data

The final step in achieving IIoT cloud security requires you to introduce the systems that will secure your network. Manufacturers use various tools to protect their data, including encryption, file signatures and firewalls.
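
As a minimal illustration of one of those tools, here is a sketch of how a file signature check might look in Python. It isn't tied to any particular IIoT product or vendor API; the key, file name and workflow are hypothetical, and a real deployment would pull the key from a proper key-management system.

    import hashlib
    import hmac

    def sign_file(path: str, key: bytes) -> str:
        """Compute an HMAC-SHA256 signature over a file's contents."""
        with open(path, "rb") as f:
            return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

    def verify_file(path: str, key: bytes, expected_signature: str) -> bool:
        """Return True only if the file's current signature matches the expected one."""
        # compare_digest avoids leaking timing information during the comparison.
        return hmac.compare_digest(sign_file(path, key), expected_signature)

    # Hypothetical usage: sign a firmware image when it's published, verify it before install.
    with open("sensor_firmware.bin", "wb") as f:
        f.write(b"example firmware contents")
    SECRET_KEY = b"replace-with-a-key-from-your-key-management-system"
    signature = sign_file("sensor_firmware.bin", SECRET_KEY)
    print(verify_file("sensor_firmware.bin", SECRET_KEY, signature))  # prints True if untouched

The same idea, confirming data hasn't been altered in transit or at rest, is what commercial signing and integrity-monitoring tools automate at scale.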

Keep in mind that you’re protecting your digital assets from external and internal threats. By placing all the focus on counteracting and preventing cyber-attacks, it’s easy to lose track of employees who might have physical access to your IIoT cloud. This is where user access privileges, consistent system administration and strong password requirements are helpful.

Creating a Security Model That is Versatile, Flexible and Scalable

It's also important to develop a security model that can adapt to future trends and innovations. Hardware regarded as groundbreaking today will be replaced by newer, upgraded versions within a few years. Likewise, hackers and cybercriminals are always devising new ways to access vulnerable systems and exploit weaknesses before they're patched. It's a never-ending tug of war that requires a lot of diligence on the part of your IT team, because the success of your company might depend on it.

Newtonian vs. Non-Newtonian Liquids

By Megan Ray Nichols

If you've seen any viral videos in the last few years, you're probably familiar with the concept of non-Newtonian fluids — liquids that flow when moving slowly but take on a solid consistency when struck with force. Videos have gone viral of people filling entire swimming pools with a mixture of water and cornstarch, allowing them to literally run across the surface of the liquid. What is the difference between a Newtonian fluid and its non-Newtonian counterpart, and where might you encounter these fluids in your daily life?

Newtonian vs. Non-Newtonian Liquids

First, what is the difference between Newtonian and non-Newtonian fluids?

Newtonian fluids have a constant viscosity that doesn't change, no matter how much shear or stirring force is applied to them. They're also generally treated as incompressible.

Non-Newtonian fluids are just the opposite — if enough force is applied to these fluids, their viscosity will change. These fluids are broken up into two categories — dilatants, which get thicker when force is applied, and pseudoplastics, which get thinner under the same circumstances.

These can be further broken down into rheopectic and thixotropic categories. Rheopectics work like dilatants in that they get thicker when force is applied. Thixotropic materials get thinner, like pseudoplastics do. The difference is that these latter two categories are time-dependent: the viscosity doesn't change immediately, but shifts gradually as force continues to be applied.
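
If you like a little math with your ketchup, the whole classification can be summed up with the standard textbook relation between shear stress and shear rate (this is generic rheology, not specific to any of the fluids above):

    \tau = \mu \, \dot{\gamma} \qquad \text{(Newtonian: the viscosity } \mu \text{ is constant)}

    \tau = K \, \dot{\gamma}^{\,n} \qquad \text{(power-law non-Newtonian: } n > 1 \text{ for dilatants, } n < 1 \text{ for pseudoplastics)}

Here \tau is the shear stress, \dot{\gamma} is the shear rate and K is a consistency index. Rheopectic and thixotropic fluids add a time dependence on top of this, which is why a single value of n can't describe them.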

Newtonian Fluids in Daily Life

These fancy names might sound like something out of a science fiction novel, but they’re really just the scientific names for things you encounter in your daily life. What Newtonian fluids have you encountered today?

If you took a shower this morning or had a drink, then you've already encountered the most common Newtonian fluid — water! Water does not change viscosity no matter how much force you apply to it, and it's essentially incompressible as well.

Other common Newtonian fluids include mineral oil, alcohol and gasoline.

Non-Newtonian Fluids in Daily Life

For this section, we’re going to break it down into the four categories of non-Newtonian liquid that we listed above.

Dilatants are probably the best-known non-Newtonian fluids. They become thick or almost solid when force is applied to them and are made up of water mixed with other materials. Oobleck, the colloquial name for a mixture of water and cornstarch, is probably the most famous, but quicksand and Silly Putty also fall into this category.

Pseudoplastics might not sound very appetizing, but you probably have a bottle of one in your fridge right now. That’s right — ketchup is a non-Newtonian fluid. The fact that the viscosity changes as each new ingredient is added to the mix makes it tricky to mix ketchup on a large scale.

Now we get into the weird non-Newtonian fluids.

Rheopectic fluids get thicker in relation to the pressure being applied to them and the time that the pressure is being applied. The best example of a rheopectic fluid is cream. With enough time and pressure, cream becomes butter.

Thixotropic fluids are similar to pseudoplastics in that they get thinner as pressure is applied to them, but the effect also depends on how long the pressure is applied. Things like cosmetics, asphalt and glue all fall into the thixotropic category.

It might seem like this is useless information, but it can actually be very useful, especially if you’re ever in a restaurant that still uses glass ketchup bottles. Simply remember that ketchup is a non-Newtonian pseudoplastic and will get thinner as more force is applied to it. Give that bottle a couple of good thumps, and you’ll be in French fry heaven.

Sources:

https://www.youtube.com/watch?v=RIUEZ3AhrVE

https://blog.craneengineering.net/what-are-newtonian-and-non-newtonian-fluids

https://www.philamixers.com/news/how-condiments-are-made/

Spider Eyes are Nature’s Marvels

Now I do not exactly remember where and how I started my journey down this rabbit hole. But the deeper I went, the more interesting it became. It was a great learning experience. I'm clearly not an expert; here I share the understanding of the spider eye that I developed over a few hours of exploration. For this I referred to various sources, all of which are mentioned in the links. And if you know more or would like to add something interesting to the article, please let me know in the comments below.

The first thing to know about spider eyes is that 99% of spiders have 8 eyes. A little less than 1% of them have 6 eyes. In some fringe species there are 4, 2 or no eyes at all. Apparently, the family a spider belongs to can be determined from the pattern in which these eyes are arranged on its cephalothorax (let us mortals call it the 'head' to keep things simple). Some blessed human made the following schematic to help us do exactly that. In case you ever feel the need to do so, here it is:

And in much greater detail, right here.

For their small size and limited number of photocells, spider eyes, especially the jumping spider's (Salticid) eyes, perform surprisingly well. Their resolution bears better comparison with that of larger mammals than with that of insects. In the human world, a camera built to such standards would simply be an engineering miracle. You will understand why I say that soon…

In the image above, if you locate the family Salticidae, you will see those two large eyes in the front, which are particularly interesting. These are called the principal eyes (or anterior median eyes) and are the ones that allow high-resolution vision. So much so that the spider can resolve two spots just 0.12 mm apart on a screen 20 cm away. That's an acuity of about 0.04°, roughly ten times better than a dragonfly's.
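
As a quick sanity check on those numbers (my arithmetic, not the source's), two spots 0.12 mm apart seen from 20 cm away subtend an angle of

    \theta = \arctan\!\left(\frac{0.12\ \text{mm}}{200\ \text{mm}}\right) \approx 0.034^{\circ},

which is indeed right around the quoted 0.04° acuity.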

The brain of this spider, shown in blue in the image below, is pretty big for its size. The ratio of brain volume to body volume is more or less similar to that of human beings. The Salticid brain also has a rather large region dedicated to visual processing.

The principal eyes we are talking about are shaped like elongated tubes, as seen below, with a hard lens at the front and a layer of photocells at the other end. Inside the tube, near the retina, is another little lens which moves back and forth along the tube like a telephoto lens system. These elongated tubes are like the tubes of a pair of binoculars, which allow for higher resolution in a small package.

However, the downside of such a tube-like architecture is that it limits the field of vision. Here's how that problem is dealt with.

The front part, with the big corneal lens, is fixed and has a long, fixed focal length. The farther end, where the retina is located, is connected to the muscles shown in red. These muscles let the tube's far end move around with several degrees of freedom, making quick movements to scan a larger scene one small field of view at a time.

In the video below you can see the retinal end of the black tubes moving around inside the translucent exoskeleton as the spider builds a complete, high-resolution image of its surroundings, one small field of view at a time.

If you peer deep into their eyes, you see dark (black) when you are looking straight into the small retina. However, when the far end of the tube swings away, you see a honey-brown color with spots. That is the inner wall of the eye tube you are seeing in the following video.

Then the retina itself is another biological marvel. Unlike our single-layered retina, the Salticid's retina is made up of four layers, arranged one behind the other. This lets nature pack more photocells into a smaller area and also helps the spider see in color, since different wavelengths are refracted by different amounts and so come into focus at different planes.

Counting from the rear end, the spider uses different layers of the retina to pick up different colors: layers 1 and 2 detect green (~520–580 nm), layer 3 detects blue (~480–500 nm) and layer 4 detects ultraviolet (~360 nm).

An important detail in the above image reveals how spiders manage to keep objects at different depths in focus. Layer 1 has its photocells arranged in a staircase fashion, at varying distances from the lens, which ensures that every object is sharply focused on at least one part of the layer.

The other problem, distance estimation, which matters a lot to a jumping spider, is again solved rather elegantly by the same apparatus. Humans use stereo vision, two eyes set far apart, to estimate distance. Other animals move their heads to do the same, but I'm not getting into that.

Jumping spiders employ a completely different algorithm, one that uses blur (defocus) cues, and the second retinal layer plays a crucial role in it. That layer sits where a sharp blue image would form, but its photocells are not sensitive to blue light, as mentioned above. The green light it does detect is therefore blurred at that plane. The amount of blur depends on the distance of the object, so by processing how blurred the image is, the spider can judge depth, allowing it to jump and hunt accurately.
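
To see why blur encodes distance, here is the standard thin-lens defocus relation, a generic optics formula rather than anything taken from the spider papers. For a lens of focal length f and aperture diameter A that is focused at distance s, an object at distance S produces a blur circle of diameter

    c = A \cdot \frac{f}{s - f} \cdot \frac{|S - s|}{S}.

Since A, f and s are fixed properties of the eye, measuring the blur c in the defocused green image effectively pins down the object distance S.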

If you are a university student with free access to journals, I think a quick look at the paper titled "'Eight-legged cats' and how they see – a review of recent research on jumping spiders" will help you delve into greater detail.

Psst: someone has uploaded it on ResearchGate for free access, for I don't know how long: here.

Please leave a comment below to let me know your thoughts on this, or if you have any ideas for future posts. I plan to reward the top commentators every month so do not forget to say something.

Humans Are to Blame for These Environmental Disasters

Humans have changed the environment drastically, especially in the last century. As our population has grown, so has our effect on our natural world. Much of that impact has, unfortunately, been negative.

Since our population began booming, we've made gradual changes to the environment and caused some large environmental disasters that have inflicted acute harm on both the environment and human health.

An environmental disaster is an event caused by human activity that’s damaging to the environment. This differentiates it from a natural disaster, which occurs due to natural processes.

Our planet and humankind have seen many environmental disasters in the recent past, but a few stand out as especially costly in terms of money, environmental damage and human health impacts. Here are five of the most catastrophic.

  1. The Dust Bowl

The Dust Bowl, which occurred in the 1930s in the Southern Plains of the United States, is a well-known environmental disaster. Drought, coupled with rapidly expanding poor agricultural practices, caused dust storms that ripped away the fertile soil of the semi-arid region and created “black blizzards” that reached heights of up to 10,000 feet in the air.

The event made the region virtually uninhabitable and worsened the economic difficulties of the Great Depression. It also inspired lawmakers to pass bills promoting responsible farming practices. It was years before rain finally returned to the region, eventually restoring the plains.

  2. Chernobyl

The Chernobyl disaster is infamous as the most devastating event involving a nuclear power plant in the planet’s history. In 1986, one of the reactors at Chernobyl in Ukraine exploded, spewing huge amounts of radiation into the air.

The explosion itself killed two workers, and more died in the hours following the event. Twenty-eight workers died in the next four months, as did many emergency responders. The radiation may have caused an increase in instances of thyroid cancer in the region.

The radiation also killed all the trees in the area, and the site is still largely off-limits due to fears about the impacts of lingering radiation.

  3. Exxon Valdez Oil Spill

When oil spills from a tanker, pipeline or other source, it can harm wildlife and ecosystems and contaminate groundwater and soil, as well as impact human health. The destruction of plant life associated with oil spills can increase erosion by as much as four times the normal amount.

One of the most infamous oil spills occurred in 1989 in Alaska's Prince William Sound. An oil tanker, the Exxon Valdez, hit a reef that tore open its hull and allowed 11 million gallons of crude oil to spill into the water. The leak killed an estimated 250,000 seabirds, 2,800 otters and 300 harbor seals. You can still find oil under beaches near the location of the accident.

  4. London Smog

Smog is a common occurrence in cities around the world, but in 1952 in London, it reached unheard-of levels of severity. For five days, a heavy fog merged with sulfurous fumes from coal fires, power plants and vehicle exhaust.

The smog killed around 12,000 people, sent 150,000 to the hospital and killed thousands of animals. To this day, it remains one of the largest air pollution events in history. It led to the eventual creation of the UK's Clean Air Act of 1956, which limited the use of coal in cities.

  5. The Bhopal Disaster

Industry makes our modern life possible, but also comes with environmental risks. In 1984 in Bhopal, India, the worst industrial disaster of all time killed approximately 25,000 people.

On Dec. 2, a chemical plant began leaking a deadly gas known as methyl isocyanate (MIC). Safety systems were not functioning properly, so 27 tons of the gas spread throughout the city.

Many thousands of people died of respiratory failure, cardiac arrest and other health problems within the next few days. The disaster also killed many animals and plants in the area and contaminated the groundwater. Toxic material still remains at the site today due to improper cleanup.

These environmental disasters had a devastating impact on their local environments, animals and people, and may have also contributed to global issues. As we move forward, we must strive to learn more about our natural world and do our best to protect it.

7 Health Benefits of Chocolate — Backed by Science!

If you like chocolate, you’ll be happy to know that in moderation, this tasty snack is actually good for you. No, this isn’t us just trying to find a way to justify our chocolate habit — there’s some actual science here! Whether you like the occasional Snickers bar or just can’t get enough of dark chocolate, here are some of the science-backed health benefits of cocoa.

First, a Disclaimer

Don’t rush out to the grocery store just yet. It’s important to know what type of chocolate to look for. Some types of chocolate have different health benefits, while others might not have any benefit at all.

First, make sure your chocolate is real instead of what is known in the industry as compound chocolate, which uses cocoa powder for chocolate flavoring but has no cocoa butter in the product. It's easier for some manufacturers — cocoa butter can be difficult to work with in large batches — but it isn't good chocolate. If your ingredient list shows other forms of fat, like vegetable oil or soybean oil instead of cocoa butter, skip the candy bar.

Now, on with the show!

1. Chocolate Helps Your Heart

Chocolate can be a great tool to help you mend a broken heart, but it can also help keep your ticker healthy. One study, completed over nine years by Swedish scientists, found that one to two servings of dark chocolate every week helped to reduce the risk of heart failure in adults.

It wasn’t Hershey bars these individuals were eating, though — milk chocolate is so heavily processed that it doesn’t contain the kind of beneficial components dark chocolate does. Dark chocolate contains flavonoids, which help to protect the heart when eaten in moderation. These are the same antioxidants that are found in things like red wine, onions and tea.

2. High-Quality Tasty Nutrition

Believe it or not, a bar of high-quality dark chocolate can help you get a good portion of your daily recommended value for minerals like iron, magnesium, copper and manganese. A 100-gram portion also contains 11 grams of fiber.

Now, you don't want to eat 100 grams regularly — that equals about 3.5 ounces, or 600 calories' worth of chocolate — but even a small portion offers a host of nutritional benefits.

3. Candy Helps You Lose Weight

This might sound like we're making stuff up, but it's true: Dark chocolate in moderation can help aid weight loss. That's because it's more filling than milk chocolate — due in part to that higher fiber content we mentioned a moment ago — and it also helps to lessen your craving for other sweet, fatty or salty foods that could make it harder to stick to your diet.

4. Keep That Cholesterol in Check

One of the main components in chocolate, cocoa butter, is a fat — and we’ve been told for years to avoid fat because it can be detrimental to our cholesterol levels. As it turns out, though, dark chocolate can help to both raise HDL — the good cholesterol — and lower total LDL in men with already elevated cholesterol.

The flavonoids in dark chocolate also help to prevent LDL from oxidizing. When bad cholesterol reacts with free radicals, it becomes oxidized and starts damaging tissues.

5. Pack It With Your Sunscreen

This is a benefit that only appears after you eat chocolate for a while — 12 weeks, minimum, according to researchers — but eating dark chocolate regularly can help to protect your skin from sun damage. It's no replacement for sunscreen, but regular chocolate consumption can more than double your minimal erythema dose, or MED. That's just a fancy term for the amount of sun exposure it takes before you start to get sunburned.

The flavonoids in dark chocolate help to improve blood flow to the skin. They can also help increase skin density and overall hydration. Don't skip your sunscreen, though — this might be an added level of protection, but it won't keep you from getting sunburned during a day at the beach.

6. Not Just Good For The Body

In addition to helping with skin and heart health and cholesterol, chocolate has also shown signs of being good for your brain. Studies have shown that chocolate can help improve blood flow to the brain, which can help improve brain function. This benefit has been primarily studied in young adults. It has also been shown to help improve cognitive function in elderly patients who suffer from cognitive impairments.

7. Candy to Prevent Diabetes — Not as Crazy as It Sounds

People with diabetes are generally told to avoid candy and other sugars, but dark chocolate could actually help diabetic patients regulate their symptoms or prevent diabetes from developing at all. A small study out of Italy found that patients who ate dark chocolate every day for 15 days showed improved insulin sensitivity.

The amount of chocolate consumed during the trial equaled about 480 calories, so it's important to consider how much chocolate you're eating. However, if it can help improve insulin sensitivity, it might be worth it to add a square or two of dark chocolate to your diet.

Chocolate isn’t as bad for you as your dentist or doctor might be telling you — eating high-quality dark chocolate in moderation can be a great way to improve your health over time. Just make sure you’re eating your chocolate in addition to a healthy diet.

How This Hurricane Season Is Affecting U.S. Oil

Just when it seemed U.S. oil couldn't be stopped, hurricane season 2017 arrived to rain on its parade. The National Oceanic and Atmospheric Administration predicts 2017 will bring 11 to 17 tropical storms and up to four massive hurricanes between June and November.

For offshore oil platforms, signs of a nasty season mean it's time to batten down the proverbial hatches. When weather forecasters predict conditions like this season's, all non-essential personnel are evacuated from platforms to ensure their safety. While this is a drill rig operators have been through in the past, every storm is different, and you can't be too prepared for the chaos a hurricane can bring.

Preparing for the Storm

Making an offshore rig ready to sustain a hurricane is a delicate balancing act between protecting the employees who work on the platform and safeguarding as much oil production as possible.

The very real impact that hurricane season can have on production makes every last operating hour crucial, so personnel essential to rig operation are allowed to remain aboard until a few days before the storm. Sometimes it can be less than a day, but well-trained crews know how to stay professional even under pressure, because failure could mean disaster.

Within a few days of the storm's arrival, drilling stops and all personnel are evacuated. Drill ships that are in the potential path of the storm are relocated to safe waters. The unpredictable nature of storms makes it necessary to stop operations even outside of the direct path of the hurricane.

Technology is the biggest asset oil producers have in the fight against storms like Harvey, Irma and Jose. Modern oil rigs are equipped with GPS systems that allow supervisory staff to monitor their positions during and after the storm and locate them should a rig be pulled away from its drilling location by storm surges.

Restarting Operations

This year's flurry of storms poses a grave threat to America's prominent position in the global oil market because of its impact on multiple areas critical to U.S. oil production. Hurricane Harvey struck Texas' Gulf Coast, which is home to 45 percent of American refining capacity.

Add to that the offshore operations in the Gulf, which account for 17 percent of crude oil production, and now the rigs struck by Irma and Jose, and you have the makings of a disaster.

Once again, technology will be essential in restoring production capacity as quickly as possible. Many offshore rigs are designed around lean manufacturing principles. Assuming they can endure the winds and waves, that should help get oil production on its feet as quickly as possible.

Lean manufacturing practices focus on reducing waste in the form of motion, downtime, over-processing and four other potential inefficiencies. By allowing an oil rig to continue producing until just days before a storm hits, and to restart operations with a minimal crew, these practices can help recover days of production time.

Assessing Damage

No amount of preparation can guarantee that sensitive equipment won’t be damaged in the course of a storm, which is why drilling companies practice special flyover and assessment procedures to determine if offshore sites are safe to send personnel back to following a massive storm.

Remobilization, or “re-mob” as it’s called, is the process of gathering all company assets and ensuring they’re safe to continue work before beginning drilling operations again. Following an assessment by helicopter, small teams are dispatched to rigs and ships to determine if everything is in working order.

The ability to track every single asset using GPS makes the process of finding ships and platforms simpler than it was in the past, but the real challenge comes in repairing damaged equipment after a storm. It can take days or weeks to repair complex extraction equipment with crews sometimes working round-the-clock to get a significant drilling facility back online.

Ultimately, the small teams can bring rigs and ships back online. Once operational, assets can begin receiving more personnel. It’s a race against the clock every time, and this year it looks like those assessment crews are going to get more than their fair share of practice.

Frozen Tissue Array Methodology, Applications and Benefits

Frozen tissue array is a methodology used in modern molecular and clinical research to analyze hundreds of tumor samples on a single slide. It allows high-throughput analysis of proteins and genes across a large set of samples at once. It consists of frozen tissues in which separate tissue cores are assembled together to allow simultaneous histological analysis. It has made it easy to streamline several research projects, saving significant time, and it conserves precious reagents compared with analyzing numerous slides that each contain a single section. It is an ideal screening tool to use before embarking on extensive research and analysis.

Preparation of frozen tissue array

Each array is produced using a state-of-the-art preparation technique and the finest-quality specimens. Upon excision, the tissues are placed in liquid nitrogen and then sorted meticulously by an expert pathologist. Cores from 20 or more different tissues, or from pathologically relevant tumors, are then combined in a single block. The quality of each slide is checked using dedicated staining methods. Tissue cores 2 mm in diameter are taken from the region of interest and recovered from frozen OCT blocks by varying their freezing temperatures (see more here).

Features of frozen tissue array

Every product is designed to conform to FDA guidelines and must meet the requirements for therapeutic antibody validation and in vitro diagnostic device certification. There is a vast range of tissues in every array. The technique is suitable for both radioactive and non-radioactive detection. It combines arrays from a variety of human donors. Compared to paraffin-embedded tissues, frozen array tissue offers better antigen exposure.

Frozen Tissue array applications

The technique has been employed in various areas, such as:

  • Rapid screening of novel gene or protein expression against a large panel of tissues
  • High-throughput diagnostic and therapeutic analysis of antibody variants
  • Analysis of gene expression patterns
  • In situ hybridization, used together with immunohistochemistry
  • Comparison of novel gene and protein expression
  • FISH-based experiments in the analysis of DNA

In summary, frozen tissue array provides an excellent target material for the effective study of RNA, DNA and proteins.

Samples of DNA, RNA and certain antibodies don't perform optimally when used with pre-fixed, paraffin-embedded tissues. However, they work pretty well with frozen tissue arrays. Procedures that require fixation can still be identified and carried out in an appropriate manner. This means you can include a wider array of samples in your final analysis than when using the paraffin-embedded procedure.

The only drawback with frozen tissue array is that some distortion of cell morphology and tissue architecture is likely to occur, which can be seen by comparing the sections with paraffin-embedded ones. Additionally, only a limited number of samples can be embedded in one array, because the OCT compound tends to crack or bend, particularly when samples are placed one millimeter apart.

Conclusion 

The invention of this technique has become a boon to many scientists from around the world. It has saved scientists and pathologists significant time when conducting several tests. It also has numerous potential applications in basic research, prognostic oncology, and drug discovery.

Determining methods of Automated Nucleic Acid Extraction

By Lorenzo Gutierrez

Scientific exploration - Determining Methods of Automated Nucleic Acid Extraction

The human body is a complex structure made up of various cells and genes. The central system of genetic identification for humans is DNA, or deoxyribonucleic acid. It is present in nearly all living organisms, as it is the main constituent of chromosomes. With the emergence of a variety of communicable diseases, it is pertinent for researchers to be able to extract DNA. They do this to run various tests to see how best the world's population can extend its lifespan through science.

What is Automated Nucleic Acid Extraction?
This refers to the removal of DNA by mechanical/automated means. Extraction by this means is deemed to be more accurate and more beneficial to science as it lessens the margin of error, or so it is alleged. “Automated nucleic acid extraction systems can improve workflow and decrease variability in the clinical laboratory.”[1] There are various methods available, and as science evolves, so does technology, and research advances with it.

Methods of Automated Nucleic Acid Extraction
There are various methods of extraction and various machines used by researchers on a day-to-day basis to obtain much-needed samples of DNA, because the fight toward cures for many communicable diseases is a rather tedious process. Let us face the fact that technology is put in place to lighten the workload and move toward more accurate results. Many companies have delved into the creation of different extractors that each operate at varied levels. Some were created to be workhorses, able to complete massive amounts of work, while others produce only an average output. Laboratories vary in size, and as such they can choose the extractor best suited to their work.

There is the manual means of extraction; you can refer to this as good old reliable. Researchers are heavily consumed by work when they use manual extraction methodologies, as the process is incredibly hands-on. Of course, some level of technology is involved; however, the researcher needs to be present to adjust variables and incorporate other items as the need arises.
Automated extractors allow researchers to set up their samples in the machines and then leave to complete other tasks. Researchers aren't needed at every step during automated extraction, as the technology does most of the work once the samples are prepared and loaded. It must be noted that with greater technology, companies also incur greater cost. Where a manual extraction can be performed for approximately $5 per sample, an automated extractor can cost anywhere from $7.60 to $12.95 per sample.
You may find that, true to human nature, researchers gravitate toward a more established extractor because it has been around longer and more reports have been written about it. However, it is important to still venture out and try new machinery; before the most renowned extractor became established, it too was merely "extractor X," for argument's sake, an unknown machine with the potential to ease the workload.

Research of two methods [2]
For the purposes of this article, we will look at a particular study performed by a group of research scientists; the reference is provided below. After comparing the two automated extractors with the manual method, it was determined that the first extractor, X, was reasonably efficient, with recovery varying from 86% to 107% of the manual method. The second extractor, Y, had a recovery efficiency of 83% to 107% compared with the manual method. Though the results varied only marginally, the real difference came down to cost: extractor X was the most costly option at $12.95 per sample, while Y cost $7.60. There is also a key operational difference, as X doesn't allow the researcher to walk away and leave the machine to perform its extraction, and X also needs a higher volume of samples to perform its task. Automated nucleic acid extraction is beneficial to researchers, as it can yield results on par with manual extraction while requiring far less hands-on work. It is, however, a more costly approach.
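
Just to make the cost side of that comparison concrete, here is a small Python sketch that totals the per-sample prices quoted above for a batch of samples. The batch size of 96 (one standard plate) is my own assumption for illustration:

    # Per-sample costs quoted in the article (USD); treat them as approximate.
    COST_PER_SAMPLE = {
        "manual": 5.00,
        "extractor_Y": 7.60,
        "extractor_X": 12.95,
    }

    def batch_cost(method: str, n_samples: int) -> float:
        """Total cost of a batch at the quoted per-sample price."""
        return COST_PER_SAMPLE[method] * n_samples

    # Hypothetical batch of 96 samples (one standard plate).
    for method in COST_PER_SAMPLE:
        print(f"{method}: ${batch_cost(method, 96):,.2f}")

Run on a 96-sample batch, that works out to roughly $480 for manual work versus about $730 for Y and about $1,243 for X, which is the trade-off against hands-on time described above.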

[1] Dundas N., Leos N.K., Mitui M., Revell P., Rogers B.B. (2008, June 13). Comparison of automated nucleic acid extraction methods with manual extraction.

[2] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2438199/ Retrieved August 3, 2017.

The Science Behind Beer Kegs

BY MEGAN RAY NICHOLS

Beer kegs have been serving as the centerpiece of college parties and the backbone of many bars and taverns for decades. Typically available in a half or quarter barrel, the average keg can fill approximately 124 or 62 pints of beer, respectively. While it’s relatively simple to transport, store and use a keg around the home, there are some precautions to remember.
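
Those pint counts follow directly from the keg volumes listed below (a U.S. half barrel holds 15.5 gallons and a quarter barrel 7.75) and the fact that one U.S. gallon is 8 pints:

    15.5 \times 8 = 124 \ \text{pints} \qquad\qquad 7.75 \times 8 = 62 \ \text{pints}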

Typical Components of a Beer Keg

Despite the availability of different sizes, shapes and alternate materials, kegs are pretty standard around the world. As such, several components are found on nearly every keg.

  • The keg itself is typically made of stainless steel. While quarter barrels contain 7.75 gallons of liquid, the larger half barrel boasts 15.5. Smaller kegs, which are sometimes available, contain 5 gallons.
  • A coupler, sometimes referred to as a pump, is needed to draw beer from the keg via the topmost valve.
  • Gas, in the form of either carbon dioxide or nitrogen, is used to push the beer out so it flows more smoothly and quickly. A manual party pump is often used instead, especially at parties, although it's not as effective as gas.
  • Tubing is also required to carry the beer from the keg into your cup. Commonly made of polyethylene or vinyl, the tube is sometimes chilled by partygoers to keep the beer colder.
  • If you'd rather forgo the manually powered party pump, your other option is to outfit your keg with a faucet. This ensures consistency between beer pours, which can help keep your party going all night long.


Million Dollar Space Pens or Pencils

If you had forgotten a Fisher space pen in your car's glove box six years ago and pulled it out today, it would write without a hiccup. It will also write underwater, in extreme heat and in freezing cold. In fact, it will write in space too. It has been used for exactly that for decades.

You must have heard the story about NASA spending millions to invent a pen that writes in space. That is not really true. The millions spent on research were Paul Fisher's own money, which he used to develop a pen that would write in weightless conditions. NASA was spending money on the problem at almost the same time too, but its program's budget spiraled out of control and it faced public pressure before going back to using pencils.

There's a good chance you've received an email like this one, maybe around April 15th:

When NASA started sending astronauts into space, they quickly Discovered that ball-point pens would not work in zero Gravity. To combat this problem, NASA scientists spent a Decade and $12 billion developing a pen that writes in zero
Gravity, upside-down, on almost any surface including glass And at temperatures ranging from below freezing to over 300 C.

The Russians used a pencil.

Your taxes are due again — enjoy paying them.

Snopes

The Russians' one-line solution, compared with the "$12 billion" the Americans supposedly spent, sounds like a smooth story to tell. But that is not really how it all went down.

At the height of the space race, both the Americans and the Russians used pencils to write in space. But pencils use graphite to leave a mark, and graphite is flammable, which made pencils less than ideal to take into space, especially after the Apollo 1 fire. Secondly, graphite conducts electricity pretty well. That means a broken piece of pencil tip, or even a small amount of graphite dust from it, could get into the electronics and cause shorts. And then there are the paper, wood and eraser that go with a pencil, all of which produce particles when used and are combustible.

Mechanical pencils were a better solution, as they eliminated the wood, but the graphite was still a problem. Grease pencils or wax pencils solved it to some extent, but the mark left by any pencil was not as reliable as a pen's. Ballpoint pens worked pretty well. However, the problem with normal ballpoint pens was that the ink was not designed to work well at low pressures, nor would it do very well in extreme space temperatures. Felt-tip pens, again, used a much thinner ink, which wasn't an ideal choice for use in low-pressure environments like space.

Fisher solved all of these problems by inventing a pen that used an ink cartridge pressurized to 35 psi. This ensured the ink would come out regardless of the orientation of the pen or the pressure around it. It also used a non-Newtonian thixotropic ink which acted like ketchup: it stayed put as long as the pen was not writing and flowed, thanks to a change in viscosity, when it had to. Oh, and the ink was designed to work well at -25 to 120 degrees C, not 300 C.
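
For a sense of scale (my conversion, not Fisher's spec sheet), 35 psi works out to

    35\ \text{psi} \times 6.895\ \tfrac{\text{kPa}}{\text{psi}} \approx 241\ \text{kPa} \approx 2.4\ \text{atm},

so the cartridge always pushes ink toward the ball harder than sea-level air pushes back, and in the vacuum of space there is nothing pushing back at all.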

This original space pen, the Anti-Gravity 7 or AG7, the one used on the Apollo 7 mission in 1968 after two years of testing by NASA, sells on Fisher Space Pen's website for about $60.

This video talks about how it all started from a sandwich:

[Wikipedia], [Physics.org], [Fisher Space Pen], [Snopes]

DIY Decoration: Make Chalkboard Globe

Chalkboard paint is great for labelling and decorating items around the home. Here, you will learn to make a chalkboard globe. You can put this globe on your office desk or study table. This unique chalkboard globe will not only enhance the grandeur of any table but also add a splendid look to it. This DIY project requires the following supplies:

  1. Globe.
  2. Rust-Oleum Chalkboard Spray Paint.
  3. Old Newspaper.
  4. Painter’s Tape.
  5. Chalk.
  6. Duster.

Find the Place to Work

First, find the best place to work. A well-ventilated area such as a porch is ideal. Once you've chosen the spot, cover it with old newspaper to keep the floor clean.


What You Need to Know About Driverless Technology Changing the Automotive Industry

BY MEGAN RAY NICHOLS

Even though they're not on the roads yet, driverless cars are already disrupting the automotive industry in unforgettable ways. Keep reading to learn about some of the factors expected to shape the evolution of these futuristic vehicles.

1. Market Trends

Automotive analysts say emerging technology is gaining momentum in the automotive market. Top car brands are making their vehicles compatible with popular gadgets and tech-related services, and some people think the Internet of Things (IoT) will also play a role in upcoming models.

Business leader Elon Musk has even announced he plans to earn income by lending extremely safe self-driving cars to interested persons.

2. Potential Reductions in Car Ownership

Musk’s idea doesn’t seem so far-fetched when you consider most of us are accustomed to carpooling at least occasionally. If you’re from a city where the service is available, you may have even used car-sharing companies that allow you to drive a vehicle on an as-needed basis, then drop it off in a pre-determined spot when you’re done.

Once driverless cars become more mainstream, we may increasingly use borrowed vehicles rather than owning cars. That’s especially true because self-driving cars will be too expensive for some people to own.


The Menace of American Toads in Queensland, Australia

Recently I stumbled upon an unusual documentary from the '80s about the giant American toads (Bufo marinus) of Queensland, Australia. That's correct. Who would have thought that this 50-minute movie (embedded at the end of this article), complete with songs in praise of the toad, would turn out to be one of those surprisingly informative and strangely funny films?

Well, it was certainly fun to watch. Here's a gist of all the interesting things I got from it and from some reading that ensued.

Cane toads were not found in northern Australia before the 1930s. Raquel Dexter, an entomologist, suggested during the 1932 world conference of sugar technology in Puerto Rico that the cane toad was the ultimate solution for dealing with a native Australian cane beetle. This beetle had decimated the sugarcane output of North Queensland cane farmers.

So, Reginald William Mungomery brought 102 cane toads from Hawaii into the freshwaters of the Little Mulgrave River at Gordonvale to tackle the beetle infestation. Mungomery's plan to transport the toads for two weeks from Hawaii to Sydney, and then for another two days on to Gordonvale, was a noble yet arduous one:

“We have got these bloody grubs by the balls this time and we will go on to bigger and brighter things”

A jubilant Irishman


How to Help Nature Recover from a Wildfire

BY MEGAN RAY NICHOLS

Wildfires are often associated with destruction. They wipe out homes, wildlife habitat and, of course, trees. Areas damaged to that extent take time to regrow. All that burning has another side effect, too: it releases a burst of greenhouse gases into the air. The smoke and ash from the fire can make it difficult to breathe, especially for those prone to respiratory problems like asthma.

It goes without saying that property damage is an issue with wildfires. As long as people have enough warning, there should be no casualties. However, as climate change continues to make weather increasingly severe, human safety becomes harder to guarantee.

However, one of the main issues comes from the loss of vegetation. After intense wildfires, there is a risk of soil erosion. If the fire is small, it may not be a big deal, but fires that burn thousands of acres can pose serious hazards.
