Formulus Black software stores data in persistent memory

Persistent memory is one of the hottest topics in the data storage industry, and startup Formulus Black has launched a new Linux-based software stack designed to use it.

Formulus Black’s ForsaOS can enable enterprises and cloud providers with latency-sensitive workloads, such as database and analytics programs, to keep all of their data in ultrafast server memory and forgo slower peripheral storage devices.

“In our environment, everything runs in the memory channel,” said Wayne Rickard, chief marketing and strategy officer at Formulus Black. “Storage is memory. Memory is storage.”

The design goal of ForsaOS is to remove the need for customers to modify software or pay a premium for applications such as SAP HANA to take advantage of in-memory architecture. ForsaOS abstracts the data plane to enable any workload that can run in a Kernel-based Virtual Machine (KVM) to operate entirely in system memory without changes to the application code. Formulus Black modified the Ubuntu Linux kernel to provide a direct path to logical extended memory, which acts as a virtual disk for the VMs.

… more on:

https://searchstorage.techtarget.com/news/252461688/Formulus-Black-software-stores-data-in-persistent-memory?track=NL-1822&ad=926990&src=926990&asrc=EM_NLN_111456063&utm_medium=EM&utm_source=NLN&utm_campaign=20190416_Startup%20puts%20focus%20on%20persistent%20memory%20storage%20trend

HTML5 – improvements, advantages vs. HTML and Flash

HTML5 vs Flash

https://www.keycdn.com/blog/html-vs-html5

What Are the Advantages of HTML5 vs HTML for Web Users?

Now that we’ve covered the technical side, what are the advantages of HTML5 for regular web surfers? Here are some benefits you may or may not have noticed since developers started using HTML5:

  • Some data can be stored on the user’s device, which means apps can continue working properly without an Internet connection.
  • Web pages can display more fonts with a wider array of colors, shadows, and other effects.
  • Objects on the page can move in response to the user’s cursor movements.
  • Interactive media, such as games, can run in web browsers without the need for extra software or plugins. Audio and video playback also no longer require additional plugins.
  • Browsers can display interactive 3D graphics using the computer’s own graphics processor.

By limiting the need for external plugins, HTML5 allows for faster delivery of more dynamic content.
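
As a concrete illustration of the first point above, the HTML5 Web Storage API lets a page keep small amounts of data on the user's device with a few lines of browser-side script. The sketch below is only an example; the key name and the data shape are invented for illustration:

```typescript
// Minimal sketch of client-side storage with the HTML5 Web Storage API.
// Runs in any modern browser; "notes-draft" and the Draft shape are
// arbitrary names chosen for this example.

interface Draft {
  text: string;
  savedAt: string;
}

// Save a draft locally so it survives page reloads and network loss.
function saveDraft(text: string): void {
  const draft: Draft = { text, savedAt: new Date().toISOString() };
  localStorage.setItem("notes-draft", JSON.stringify(draft));
}

// Restore the draft later, even if the page was opened while offline.
function loadDraft(): Draft | null {
  const raw = localStorage.getItem("notes-draft");
  return raw ? (JSON.parse(raw) as Draft) : null;
}

saveDraft("HTML5 lets apps keep working without a connection.");
console.log(loadDraft());
```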

What Are the Advantages of HTML5 vs HTML for Web Developers?

A major focus of HTML5 was to give developers more flexibility, which in turn would lead to more engaging user experiences. HTML5 was conceived with several goals in mind:

1. Consistent Error Handling

All browsers have parsers for handling syntactically or structurally improper HTML code, or “tag soup.” However, until recently, there was no written standard for this process.

Therefore, new browser vendors had to test malformed HTML documents in other browsers so that they could create an error-handling process through reverse-engineering.

Malformed HTML is an unavoidable fact of life; according to Rebuildingtheweb, about 90 percent of webpages are estimated to contain some improper code, so error handling is vital for properly displaying websites. Consequently, codified error handling can save browser developers a lot of time and money. The benefits of a clearly defined parsing algorithm cannot be overstated.
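
You can watch that standardized error handling in action by feeding deliberately malformed markup to the browser's own HTML5 parser through the DOMParser API; every conforming browser must repair the same tag soup in the same way. A minimal browser-side sketch:

```typescript
// Sketch: let the browser's HTML5 parser recover from malformed markup.
// Per the HTML5 parsing algorithm, the unclosed <p> and the stray </div>
// are repaired identically in every conforming browser.

const tagSoup = "<p>first paragraph<p>second paragraph</div>";

const doc = new DOMParser().parseFromString(tagSoup, "text/html");

// The parser auto-closes the first <p> and drops the stray </div>,
// so we end up with two well-formed paragraph elements.
const paragraphs = doc.querySelectorAll("p");
console.log(paragraphs.length);  // 2
console.log(doc.body.innerHTML); // "<p>first paragraph</p><p>second paragraph</p>"
```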

2. Support for More Web Application Features

Another goal of HTML5 was to enable browsers to work as application platforms. As websites became more complex, developers had to find ways to “work around” browser extensions and other server-side technologies. HTML5 gives developers more control over the performance of their websites. Many of the Flash and JS-based hacks commonly used in HTML4 are now elements inherent to the language. These changes also allow for a faster and smoother user experience.

3. Enhanced Element Semantics

The semantic roles of some existing elements have been improved to make the code more intuitive. New elements like section, header, article and nav can replace most div elements, which makes scanning for mistakes a less painful process.

4. Maximized Mobile Support

Mobile devices are notorious for giving web developers headaches. Their rapid proliferation over the last decade has made the need for better HTML standards more urgent. Users expect to access web applications from anywhere, anytime on any device, so developers have been forced to meet the demands of the market. Fortunately, HTML5 makes mobile support easier by catering to “low-fueled” devices like smartphones and tablets.

Intel Optane DC Persistent Memory When Used in Memory Mode NOT Persistent

By Jim Handy, Objective Analysis

This article was published on March 30, 2019.

Intel’s Optane: Two Confusing Modes. Part 2) Memory Mode

This post is the second part of a four-part series in The SSD Guy blog to help explain Intel's two recently announced modes of accessing its Optane DIMM, formally known as the Intel Optane DC Persistent Memory.

Memory Mode
The most difficult thing to understand about the Intel Optane DC Persistent Memory when used in Memory Mode is that it is not persistent. Go back and read that again, because it didn’t make any sense the first time you read it. It didn’t make any sense the second time either, did it?

Don’t worry. This is not really important. The difficulty stems from Intel’s marketing decision to call Optane DIMMs by the name Intel Optane DC Persistent Memory. Had they simply called them Optane DIMMs like everyone expected them to then there would have been far less confusion. That sentence above would have instead said that Optane DIMMs are not persistent when used in Memory Mode.

Readers would then have said: “Well, OK. But why use Optane, then, if it’s not persistent?”  The answer is very simple: Optane DIMMs are enormous. Consider the fact that Samsung’s largest DRAM DIMM (a very costly one) is 128GB, and Intel’s smallest Optane DIMM is 128GB and should sell for a fraction of the price; this gives you very good reason to use Optane. Everybody wants more memory.

So why in the world is it not persistent? The answer is involved, but relatively simple to understand.

Optane cannot be used as the only memory in a system – it has to be accompanied by DRAM. This is because Optane doesn’t like to communicate with the processor the way that the processor likes to be communicated with.

The module is pretty slow compared to DRAM, for three reasons:

1. The medium, 3D XPoint Memory, writes more slowly than it reads. Some say that a write takes three times as long as a read. If this chip were to communicate with the processor over a standard DDR4 interface then all reads would have to be slowed to the same speed as writes.
2. Another difficulty is that 3D XPoint Memory wears out, so it has to use wear leveling. That means that address translation must be inserted into the critical timing path, slowing down every access, reads as well as writes.
3. The third reason is one you probably never thought of: the data must be encrypted before it is stored and decrypted when it is read, further slowing the critical path. Many organizations worry that storage (HDDs, SSDs, and now NVDIMMs, including the Optane DIMM) will fall into the wrong hands, making data available to evildoers. Those organizations would not use the Optane DIMM if it did not support data encryption. (Alert readers will object that this can only be an issue if the Optane DIMM is, in fact, persistent, and they're right. I'll explain that shortly.)
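
The wear-leveling point is worth a small illustration: every access has to pass through a logical-to-physical translation before it can touch the media. The toy model below is emphatically not Intel's controller logic, just a generic sketch of why that indirection sits in the critical path of reads as well as writes:

```typescript
// Toy wear-leveling sketch (not Intel's actual controller): every logical
// address is translated to a physical block, and writes are steered to the
// least-worn block, so the translation table sits in the path of every access.

class WearLeveledMedia {
  private map = new Map<number, number>();  // logical -> physical block
  private wear: number[];                   // write count per physical block
  private data: (string | undefined)[];

  constructor(blocks: number) {
    this.wear = new Array(blocks).fill(0);
    this.data = new Array(blocks).fill(undefined);
  }

  read(logical: number): string | undefined {
    const physical = this.map.get(logical); // extra lookup on every read
    return physical === undefined ? undefined : this.data[physical];
  }

  write(logical: number, value: string): void {
    // Pick the least-worn block and remap the logical address to it.
    // (A real controller would also avoid blocks still holding live data.)
    let target = 0;
    for (let b = 0; b < this.wear.length; b++) {
      if (this.wear[b] < this.wear[target]) target = b;
    }
    this.map.set(logical, target);          // translation table update
    this.data[target] = value;
    this.wear[target]++;
  }
}

const media = new WearLeveledMedia(8);
media.write(0, "hot data");
media.write(0, "hot data v2");              // lands on a different physical block
console.log(media.read(0));                 // "hot data v2"
```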

The solution is to do what all cell phones do, a technique very similar to one that has been used for decades to manage data between DRAM and HDDs or SSDs. In a cell phone the processor can't efficiently communicate with the NAND flash, so it moves lines of code and data from the flash into a DRAM and operates on them there. In Intel's new Memory Mode the processor moves data back and forth between the Optane DIMM and the system's DRAM, and only executes code or operates on data in the DRAM.

The Optane DIMM is paired with a DRAM that behaves as a cache, and, like a cache, it is invisible to the user. You heard that right: if you use the Optane DIMM in Memory Mode then your DRAM becomes inaccessible. A typical system might combine a 64GB DRAM DIMM with a 512GB Optane DIMM, but the total memory size will appear to the software as only 512GB. This is the same thing that cache memory does: the size of the cache is not added to the size of the DRAM, it's simply invisible.

In either case the faster medium (the DRAM in this case) temporarily stores data that it copied from the slower medium (the Optane DIMM in this case), and the cache controller manages the data's placement in a way that makes it appear that the Optane DIMM is as fast as the DRAM.

At least, it appears that way most of the time. In those rare instances where the required data is not already in the DRAM, the data accesses slow down a lot. This is because the processor stops everything and moves data around. If necessary it copies modified DRAM data back into the Optane DIMM, and then it copies the missing data from the Optane DIMM into the DRAM. This occurs rarely (maybe 1-5% of the time, depending on the software that's being run), so the other 95-99% of the time the system will run at DRAM speeds. That's close enough for most people.

If you want a really deep dive into this you can order The Cache Memory Book ($97.75), which dissects all of the principles of caching. I happen to know, because I wrote it.

So let's talk about persistence. Nothing has been written into persistent memory until it actually reaches the Optane DIMM. The software thinks that it's writing to the Optane DIMM, but it's actually writing into the DRAM cache. When there's a surprise power outage, the data in the DRAM cache vanishes. This is why Intel's Memory Mode is not considered persistent. The data that was in DRAM waiting to be written into the Optane DIMM is lost. If some of the data is persistently stored in the Optane DIMM, but some hasn't been updated, and if nobody knows what data has missed being written into the Optane DIMM, then all of the data is suspect. The easy answer is to say that none of the data persisted – just start over. That's what existing DRAM-only systems assume, so it's not an alien concept. If it's in memory (DRAM or Optane) and the power is lost, then the data is lost as well.

So it’s not considered persistent.
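
Putting the caching and persistence points together, here is a deliberately tiny model, my own sketch rather than Intel's memory controller design, of a write-back DRAM cache in front of an Optane-like backing store: hits are served from DRAM, a miss writes a dirty line back before fetching, and a power loss destroys whatever dirty data never made it out of the cache.

```typescript
// Toy write-back cache sketch (not Intel's design): DRAM caches an
// Optane-like backing store. Dirty lines live only in DRAM until evicted,
// which is exactly the data that a surprise power loss destroys.

type Line = { value: string; dirty: boolean };

class MemoryModeToy {
  private optane = new Map<number, string>();    // persistent backing store
  private dram = new Map<number, Line>();        // volatile cache
  constructor(private dramLines: number) {}

  read(addr: number): string | undefined {
    const hit = this.dram.get(addr);
    if (hit) return hit.value;                   // fast path: DRAM speed
    this.evictIfFull();
    const value = this.optane.get(addr);         // slow path: fetch from Optane
    if (value !== undefined) this.dram.set(addr, { value, dirty: false });
    return value;
  }

  write(addr: number, value: string): void {
    this.evictIfFull();
    this.dram.set(addr, { value, dirty: true }); // write-back: DRAM only, for now
  }

  private evictIfFull(): void {
    if (this.dram.size < this.dramLines) return;
    const [addr, line] = this.dram.entries().next().value as [number, Line];
    if (line.dirty) this.optane.set(addr, line.value); // flush dirty line
    this.dram.delete(addr);
  }

  powerLoss(): void {
    this.dram.clear();                           // every un-flushed write is gone
  }
}

const mem = new MemoryModeToy(2);
mem.write(1, "A");
mem.write(2, "B");
mem.write(3, "C");           // cache full: one dirty line is evicted to Optane
mem.powerLoss();             // the other dirty lines vanish with the DRAM
console.log(mem.read(1), mem.read(2), mem.read(3)); // only the evicted line survives
```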

I have been told that the processor memory controller could have been designed to flush all of the ‘Dirty’ (new) data in the DRAM cache back into the Optane DIMM when power fails, thus making it fully persistent, but since the whole point of Memory Mode is to make existing software see a giant memory space without any modification, this was considered unnecessary.

Anyone who wants to take advantage of the Optane Memory’s persistence will need to use it a different way, and that’s the subject of the next post. This mode is called App Direct Mode, and it not only supports persistence, but it also allows the user to access the DRAM as DRAM (without hiding it) and the Optane Memory as Persistent Memory, so a system with 64GB of DRAM and 512GB of Optane Memory will appear to have 64+512=576GB of some kind of memory.

Just to complicate things, the memories (DRAM and Optane) don’t both have to be entirely dedicated to either Memory Mode or App Direct Mode. The software can determine just how much of either memory type will operate in Memory Mode and how much will work in App Direct Mode.

Is that confusing enough? I suspect that very few programs will manage the memory in both modes. At least, not for a very long time.

But at least you now understand that sentence at the top of this post: Intel Optane DC Persistent Memory, when used in Memory Mode, is not persistent!

As I said in 2015, when I published the industry's first 3D XPoint forecast, Memory Mode should account for the bulk of Optane's early sales, because it can be used with existing software with no modification whatsoever.

Later on, when software that uses App Direct Mode becomes available, that should change, but this will take a number of years.

Our Comments

When Intel Optane was revealed (in March 2017), there was some criticism, especially about its high price.

But in the end a lot of storage companies like it, and here is a partial list of those, including big ones, that have publicly decided to adopt the technology:

  1. Akitio
  2. Apeiron
  3. Cisco
  4. Datera
  5. Dell EMC
  6. Gigabyte
  7. HP
  8. IBM
  9. Lenovo
  10. MemVerge
  11. NetApp
  12. Redis Enterprises
  13. Supermicro
  14. Suse
  15. Tyan

Vast Data Emerges From Stealth-Mode With $80 Million in Funding

VAST Data, Inc. announced its new storage architecture, intended to break decades of tradeoffs and to eliminate infrastructure complexity and application bottlenecks. VAST's exabyte-scale Universal Storage system is built entirely from high-performance flash media and features several innovations that result in a total cost of acquisition equivalent to that of HDD-based archive systems.

Enterprises can now consolidate applications onto a single tier of storage that meets the performance needs of the most demanding workloads, is scalable enough to manage all of a customer’s data and is affordable enough that it eliminates the need for storage tiering and archiving.

As part of the launch, the start-up announced it has raised $80 million of funding in two rounds, backed by Norwest Venture Partners, TPG Growth, Dell Technologies Capital, 83 North (formerly Greylock IL) and Goldman Sachs.

The announcement of this funding comes on the heels of VAST completing its first quarter of operation where it has experienced historic customer adoption and product sales. Since releasing the product for availability in November of 2018, its bookings have outpaced the fastest growing enterprise technology companies.

“Storage has always been complicated. Organizations for decades have been dealing with a complex pyramid of technologies that force some tradeoff between performance and capacity,” said Renen Hallak, founder and CEO. “VAST Data was founded to break this and many other long-standing tradeoffs. By applying new thinking to many of the toughest problems, we are working to simplify how customers store and access vast reserves of data in real time, leading to insights that were not possible before.”

Birth of Universal Storage
The young company invented a new type of storage architecture to exploit technologies such as NVMe over Fabrics, Storage Class Memory (SCM) and low-cost QLC flash, which weren't available until 2018. The result is an exabyte-scale, all-NVMe flash, disaggregated shared-everything (DASE) architecture that breaks from the idea that scalable storage needs to be built as shared-nothing clusters. This architecture enables global algorithms that deliver game-changing levels of storage efficiency and system resilience.

Some of the significant breakthroughs of VAST’s Universal Storage platform include:
• Exabyte-Scale, 100% Persistent Global Namespace: Each server has access to all of the media in the cluster, eliminating the need for expensive DRAM-based acceleration or HDD tiering, ensuring that every read and write is serviced by fast NVMe media. Servers are loosely coupled in the VAST architecture and can scale to near-infinite numbers because they don’t need to coordinate I/O with each other. They are also not encumbered by any cluster cross-talk that is often challenging to shared-nothing architectures. The servers can be containerized and embedded into application servers to bring NVMe over Fabrics performance to every host.
• Global QLC Flash Translation: The VAST DASE architecture is optimized for the unique and challenging way that new low-cost, low-endurance QLC media must be written to. By employing new application-aware data placement methods in conjunction with a large SCM write buffer, Universal Storage can extract unnaturally high levels of longevity from low-endurance QLC flash and enable low-cost flash systems to be deployed for over 10 years.
• Global Data Protection: New global erasure codes have broken an age-old tradeoff between the cost of data protection and a system's resilience. With the company's work on data protection algorithms, storage gets more resilient as clusters grow while data protection overhead is as low as just 2% (compared to 33 to 66% for traditional systems).
• Similarity-Based Data Reduction: The vendor has invented a new form of data reduction that is both global and byte-granular. The system discovers and exploits patterns of data similarity across a global namespace at a level of granularity that is 4,000 to 128,000 times smaller than today’s deduplication approaches. The net result is a system that realizes efficiency advantages on unstructured data, structured data and backup data without compromising the access speeds that customers expect from all-NVMe flash technology.
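
VAST has not published how its similarity-based reduction works, but the contrast with classic deduplication can be sketched generically: exact-match dedup only wins when two blocks hash identically, while a similarity-based scheme can find a near-duplicate reference block and store just the bytes that differ. The toy below is only an illustration of that idea; the block size, threshold and naive byte-diff are all assumptions, not VAST's algorithm:

```typescript
// Generic toy contrast between exact-match dedup and byte-granular,
// similarity-based reduction. This is NOT VAST's algorithm, only an
// illustration of why byte granularity saves space when blocks are
// similar but not identical.

type Delta = { refId: number; patches: { offset: number; byte: number }[] };

class SimilarityStore {
  private refs: Uint8Array[] = [];

  // Store a block either as a new reference or as a delta against the
  // most similar existing reference.
  store(block: Uint8Array, maxDiffBytes: number): { kind: string; size: number } {
    let best = -1;
    let bestDiff: { offset: number; byte: number }[] | null = null;
    for (let i = 0; i < this.refs.length; i++) {
      const diff = this.byteDiff(this.refs[i], block);
      if (diff.length <= maxDiffBytes && (bestDiff === null || diff.length < bestDiff.length)) {
        best = i;
        bestDiff = diff;
      }
    }
    if (bestDiff !== null) {
      const delta: Delta = { refId: best, patches: bestDiff };
      return { kind: "delta", size: delta.patches.length * 5 + 4 }; // rough encoded size
    }
    this.refs.push(block);
    return { kind: "full block", size: block.length };
  }

  private byteDiff(a: Uint8Array, b: Uint8Array): { offset: number; byte: number }[] {
    const patches: { offset: number; byte: number }[] = [];
    for (let i = 0; i < b.length; i++) {
      if (a[i] !== b[i]) patches.push({ offset: i, byte: b[i] });
    }
    return patches;
  }
}

// Two 4 KB blocks that differ in only 16 bytes: exact-match dedup would store
// both in full, while the similarity store keeps one block plus a tiny delta.
const store = new SimilarityStore();
const blockA = new Uint8Array(4096).fill(7);
const blockB = new Uint8Array(blockA);
for (let i = 0; i < 16; i++) blockB[i] = 9;

console.log(store.store(blockA, 64)); // { kind: "full block", size: 4096 }
console.log(store.store(blockB, 64)); // { kind: "delta", size: 84 }
```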

Key Benefits
There are three ways that customers can deploy the Universal Storage platform: a turnkey server and storage cluster appliance, storage plus VAST container software that runs on customer machines, or software only.

Whatever the deployment model, customers enjoy benefits from VAST’s synthesis of storage innovations, including:
• Flash Performance at HDD Cost: All applications, from AI to backup, can be served by flash, increasing performance without increasing the cost of capacity
• Massive Scalability: Customers no longer need to move and manage their data across a complicated collection of storage systems. Everything can be available from a single ‘source of truth’ in real-time. Universal Storage is easier to manage and administer, and becomes more reliable and efficient as it scales.
• New Insights: With this increased flexibility and scalability, there are new opportunities to analyze and achieve insights from vast reserves of data.
• Data Center in a Rack: Customers can now house dozens of petabytes in a single rack, providing reductions in the amount of floor space, power and cooling needed.
• 10-Year Investment Protection: With a 10-year endurance warranty, customers can now deploy QLC flash with peace of mind. The DASE architecture enables better investment amortization than legacy HDD architectures, which need to be replaced every three to five years, while also eliminating the need to perform complex data migrations.

Austin Che, founder, Ginkgo Bioworks, said: “Ginkgo Bioworks designs custom microbes for a variety of industries using our automated biological foundry. Our mission to make biology easier to engineer is enabled by VAST Data making storage easy. Our output is exponentially increasing along with decreasing unit costs so we are always looking for new technologies that enable us to increase output and reduce cost. VAST Data provides Ginkgo the potential to ride the declining cost curve of flash while also providing near-infinite scale.”

Yogesh Khanna, SVP and CTO, General Dynamics Information Technology (GDIT), said: “Our work with VAST Data provides an opportunity for General Dynamics customers to utilize the vision of an all-flash data center with deep analytics for large quantities of data. GDIT is already delivering multi-petabyte VAST Universal Storage systems to customers who are eager to move beyond the HDD era and accelerate access to their data.”

Eyal Toledano, CTO, Zebra Medical Vision Ltd., said: “Zebra is transforming patient care and radiology with the power of AI. To achieve our mission, our GPU infrastructure needs high-speed accelerated file access to shared storage that is faster than what traditional scale out file systems can deliver. That said – we’re also a fast-growing company and we don’t have the resources to become HPC storage technicians. VAST provides Zebra a solution to all of our A.I. storage challenges by delivering performance superior to what is possible with traditional NAS while also providing a simple, scalable appliance that requires no effort to deploy and manage.”

Our Comments

This announcement marks a real event in the storage industry, as VAST Data, based in New York City, with operations and a support center in San Jose, CA, and R&D in Israel, shakes up the market, established positions and many strong beliefs that have existed for a few decades.

The main idea and the trigger of the project was how to drive cost down with a genuinely new approach to storage, and here I don't distinguish primary from secondary. How do you beat HDDs, remove them completely, avoid complex data management such as tiering, and still deliver performance?

If you can offer a flash farm at the price of an HDD one, you don't need HDDs at all, tiering becomes useless and, guess what, you can put all your data in only one tier, this new flash tier. Why would you need to consider HDDs any more? The dream comes true with VAST Data proving it is possible and real at data center scale. According to Jeff Denworth, VP of products, we are speaking about $0.30/GB for average cases and pennies per gigabyte for backup, where data is highly redundant.

We used to say, and it is still true, that HDDs combined with data reduction (deduplication and compression) can beat tape for secondary storage, potentially pushing tape down to deep archive. Here a pretty similar approach is taken, but for demanding environments where IO/s are critical to sustain real business applications. This result is possible thanks to storage class memory such as Intel 3D XPoint, QLC flash, end-to-end NVMe, unique algorithms for data reduction, protection and management, and a new internal file system. It was not possible before the first three of these elements existed, even with fantastic data-oriented algorithms.

We are speaking here about universal both in terms of components and in terms of usage. We used to distinguish primary and secondary storage based on their role in the enterprise. To be clear, primary storage supports the business: business-critical applications run on and use this storage, any downtime impacts the business, and it is a must-have. Secondary storage protects and supports IT rather than the business; this level is potentially optional if the primary has everything in it, so secondary is a nice-to-have. If you lose it or don't have it, the business on the primary keeps running and is not impacted. With VAST Data there is no such distinction, as all data, production and copy data alike, can reside in only one tier: all flash at HDD prices.

A very important design choice is the total absence of state at the compute node level, since maintaining state makes it genuinely difficult to deliver high performance for users' IO/s while keeping consistency across tons of nodes. The development team has chosen an any-to-any architecture, named DASE for DisAggregated Shared Everything, meaning that any node can talk very fast to any flash device in every storage chassis thanks to NVMe over Fabrics. The cache/memory layer normally present in the server and limited to that chassis moves down to a shared pool accessible to every server, avoiding consistency challenges and some performance penalties. VAST Data argues that caching and stateful models are the enemy of scalability.

The other key development relates to the QLC operating mode, as the engineering team wished to fully control how the flash behaves in order to maximize cell endurance while respecting the initial cost goal. A new data placement approach was invented, using the SCM layer to build and organize large write stripes to the QLC devices. The main idea was to eliminate garbage collection, read-modify-write operations and write amplification.
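
The mechanism can be sketched generically: small random writes land in a fast persistent buffer first, and only when a full, large stripe has accumulated is it written sequentially to QLC, so the low-endurance flash never sees small overwrites or read-modify-write cycles. The toy below is not VAST's code; the buffer and stripe sizes are invented for the example:

```typescript
// Toy sketch of staging small writes in an SCM-like buffer and flushing
// them to QLC only as full, large, sequential stripes. Not VAST's code;
// sizes and names are invented for illustration.

class StripeWriter {
  private buffer: Uint8Array[] = [];
  private buffered = 0;
  public qlcStripeWrites = 0;            // how often QLC is actually touched

  constructor(private stripeBytes: number) {}

  // Small random write: absorbed by the fast buffer, QLC untouched.
  write(chunk: Uint8Array): void {
    this.buffer.push(chunk);
    this.buffered += chunk.length;
    if (this.buffered >= this.stripeBytes) this.flushStripe();
  }

  // One big sequential write per stripe: no read-modify-write on QLC,
  // so write amplification on the low-endurance media stays low.
  private flushStripe(): void {
    this.qlcStripeWrites++;
    this.buffer = [];
    this.buffered = 0;
  }
}

// 10,000 random 4 KB writes become only a couple of 16 MB stripe writes.
const writer = new StripeWriter(16 * 1024 * 1024);
for (let i = 0; i < 10_000; i++) writer.write(new Uint8Array(4096));
console.log(writer.qlcStripeWrites);     // 2 full stripes (plus a partial still buffered)
```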

The metadata store is also a key point, as VAST Data does not rely on any commercial or open source database, even the well-designed ones built with a distributed philosophy in mind. Instead, the team has built its own model, shared across the storage nodes; once again, nothing exists or resides on the compute nodes. They invented what they call a V-Tree structure, very large and with seven layers, to cover the data model.

Data protection and data reduction (the latter being the term chosen by VAST Data, as it does more than deduplication) are performed after IO operations are acknowledged, in other words once data has been written to the SCM layer, i.e. onto 3D XPoint devices. Compute nodes send a copy of the same data to three storage nodes, and then the global erasure coding operates with very low overhead, around 2% with a scheme like 500 + 10. For the reduction aspect, the logic is global and works at the byte level. Both approaches are VAST Data IP.
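
The overhead figure follows directly from the stripe geometry: with 500 data strips protected by 10 parity strips, only about 2% of raw capacity is consumed by protection, versus roughly 33% for a narrow 4+2 layout and about 67% for three-way replication (the comparison schemes are my own choice, picked to match the 33 to 66% range quoted above):

```typescript
// Fraction of raw capacity consumed by redundancy for a k+m erasure code
// (k data strips, m parity strips) and for n-way replication.

function erasureOverhead(k: number, m: number): number {
  return m / (k + m);
}

function replicationOverhead(copies: number): number {
  return (copies - 1) / copies;  // extra copies as a share of raw capacity
}

console.log(erasureOverhead(500, 10).toFixed(3)); // "0.020" -> ~2%  (wide 500+10 stripe)
console.log(erasureOverhead(4, 2).toFixed(3));    // "0.333" -> ~33% (RAID-6-style 4+2)
console.log(replicationOverhead(3).toFixed(3));   // "0.667" -> ~67% (three-way replication)
```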

Finally, even if many things inside the boxes are new innovations, applications continue to consume storage via classic interfaces. The VAST Data farm is dedicated to unstructured content, so it exposes industry-standard file sharing protocols such as NFS, here in v3 (we expect SMB in the near future), and a de facto object interface with an S3-compatible API.

We will see how the market reacts to this announcement, which represents a real breakthrough for the storage industry, backed by real engineering effort, development and innovation.

The $80 million in funding announced here was received in three rounds:

  • 2016: $15 million in series A
  • 2018: $25 million in series A
  • 2019: $40 million in series B

Executives have deep backgrounds in AFA companies, including XtremIO, Kaminario and Pure Storage.

Renen Hallak, CEO and co-founder: prior to founding the start-up, he led the architecture and development of an AFA at XtremIO (sold to EMC), from inception to over a billion dollars in revenue, while acting as VP of R&D and leading a team of over 200 engineers; earlier he developed a content distribution system at Intercast, from inception to initial deployment, acting as chief architect; he was also a member of the CTO team at Time to Know; he published his thesis in the journal Computational Complexity and presented at the Theory of Cryptography Conference.

Shachar Fienblit, VP of R&D and co-founder, came from Kaminario (seven years, ending as CTO) and IBM.

Jeff Denworth, VP of products and co-founder, was at CTERA, DDN (for seven years), Cluster File Systems and Dataram.

Mike Wing, president: formerly spent 12 years at Dell EMC, ending as SVP, primary storage, and previously three years at EMC.

Avery Pham, VP of operations: worked more than five years at Pure Storage, as well as at DDN, EMC and Cisco.

Source : Storage Newsletter

https://www.storagenewsletter.com/2019/03/01/vast-data-emerges-from-stealth-mode-with-80-million-in-funding/

Why Dell EMC May Move into Storage Acquisition Mode

Following a five-year absence, Dell has returned to the public markets, undergoing a complex reverse IPO with VMware just before Christmas. It’s a development that could have enormous implications for enterprise data storage. Newly flush with public capital, Dell has a powerful tool with which to make acquisitions, and the company now has plenty of incentive to do so.

Dell still has a lot of product consolidation work to complete left over from its $67 billion acquisition of EMC in 2016, and the macro trends around how the enterprise manages and stores data are starting to drive organizations away from on-premises storage equipment to hybrid cloud-based storage services. Dell can always rely on VMware to help the company compete in cloud computing, but Dell will need to be able to provide customers with on-demand, OpEx-based data storage and management solutions.

Acquisition is the obvious way to fill these gaps, and Dell’s re-entry into the M&A market could drive up valuations and accelerate exits for young storage companies. In this eWEEK Data Points article, Laz Vekiarides, a former Dell and EqualLogic executive and current CTO of ClearSky Data, offers readers six points about the potential impact Dell’s newly public status could have on enterprise data storage and management.

Data Point No. 1: Dell is still the 800-lb. gorilla of enterprise storage.

……

More: http://voip.eweek.com/storage/why-dell-emc-may-move-into-storage-acquisition-mode?utm_source=dlvr.it&utm_medium=twitter

Fujitsu Drives Data Center Transformation with Hyper-Converged Infrastructure for SAP HANA®

 

Fujitsu EMEIA

Munich, January 18, 2019 – Fujitsu today announces new options for enabling businesses to run mission-critical SAP HANA® workloads on a hyper-converged infrastructure. These options will allow businesses to manage and scale advanced, fully-virtualized SAP solution environments.

The Fujitsu Integrated System PRIMEFLEX for VMware vSAN unlocks the benefits of decoupling compute, storage and networking capabilities from underlying hardware. This means businesses can enjoy cloud-like economies by virtualizing entire data centers running all workloads – including SAP applications and other business applications running on SAP HANA – on a common, software-defined pool of resources.

As a result, high-performance applications can be scaled according to demand, with the benefit of a unified management of resources. Optimizing on-premises resources delivers cost savings and powers the performance of data-rich applications. This enables enterprises to plan for the flexible growth of SAP solution landscapes in line with the demands of the digital era – while the hyper-converged approach builds a bridge to future cloud integration.

Fujitsu PRIMEFLEX for VMware vSAN is an end-to-end infrastructure solution built on best-in-class components, including high-performance, four-socket Fujitsu PRIMERGY x86 servers, which have consistently excelled in benchmark tests, and the market-leading VMware virtualization software vSAN. The solution supports four-socket servers, and is fully optimized for SAP HANA and applications that leverage the speed of in-memory databases, such as real-time data analytics.

Fujitsu PRIMEFLEX for VMware vSAN comes with a range of management options. Simplified data center operation via the integrated Fujitsu ServerView server management suite allows advanced monitoring and management of all critical hardware components. Fujitsu Software Infrastructure Manager gives organizations centralized control over entire datacenters, while Fujitsu Software Enterprise Service Catalog Manager is a stepping stone for customers to turn their virtual infrastructures into hybrid cloud environments. A comprehensive package of consulting, integration and support services from Fujitsu further streamlines the implementation and operation of Fujitsu PRIMEFLEX for VMware vSAN.

More : http://www.fujitsu.com/fts/about/resources/news/press-releases/2019/fujitsu-drives-data-center-transformation-with-hyper.html

Five Young Companies Making An Impact On The World To Watch In 2019

It’s been a great year for new startups and 2019 promises to be no less exciting, with new technologies to adopt, frontiers to explore, and environmental and sustainability challenges to overcome, testing entrepreneurs’ innovative capabilities to the max.

Here are five companies already promising to make an impact on the world that are ones to watch in 2019 and beyond.

Bee Vectoring Technologies

Canadian tech firm Bee Vectoring Technologies (BVT) has developed a commercial alternative to pesticide spraying of food crops, using bumblebees to distribute a naturally occurring, organic, inoculating fungus during their natural foraging cycle, which makes it one to watch.

The BVT system, which has been in R&D for several years, sees commercially reared bumblebees walk through a specialist tray dispenser of organic, inoculating powder before exiting their hive and dropping spores on each plant they visit. The powder contains a naturally occurring fungus, Clonostachys rosea. When absorbed by a plant, it enables the plant to effectively block destructive diseases such as botrytis in strawberries.

The process is 100% natural, harmless to bees, animals and humans, and reduces the need for chemical pesticide spraying. In recent large-scale commercial demonstrations on strawberries in Florida, not only did the BVT system deliver comparable or improved disease protection over sprayed chemicals, it also delivered fruit yield increases of between 7% and 29%. In a recent trial on blueberries in Nova Scotia, yield increases were recorded at 77%.

More: https://www.forbes.com/sites/alisoncoleman/2018/12/30/five-young-companies-making-an-impact-on-the-world-to-watch-in-2019/#51388f7c1ac4

How Japan Is Harnessing IoT Technology To Support Its Aging Population

If you’re a diabetes patient in Japan, the gods may be watching over you. The country is struggling with rising numbers of diabetics, but a novel approach to managing Type 2 diabetes mixes a bit of psychology with Internet of Things (IoT) devices, as well as seven lucky gods from Japanese folklore. It’s part of a broader trend in which Japan is deploying cutting-edge technology to grapple with an aging population.

more …

https://www.forbes.com/sites/japan/2018/12/04/how-japan-is-harnessing-iot-technology-to-support-its-aging-population/#2cc1e6593589