Making sense of Big Data

In the modern knowledge economy every organisation is, to some degree, collecting big data. Most commentators agree that big data is defined by its inherent characteristics: volume, velocity, variety and veracity. The challenge is not so much collecting all this data as making sense of it, and applying the insights derived from it to create opportunity and value. It is in this “sensemaking” process that the secret lies. Software on its own will never be a panacea for your big data and decision-support problems. You need humans in the loop to provide the context and interpretation required to make sense of the data, including knowledge of how analytical models work and how they should be applied.

Furthermore, Hadoop as a software framework is often assumed to be the answer to everything associated with big data. Nothing could be further from the truth. There are literally hundreds of products in the marketplace focused on solving problems in the big data space, and herein lies the rub: which software products and data analytics strategies are actually suited to your budget and organisational requirements? In many instances open source products are fit for purpose and have been battle-tested in some of the best-known companies around the globe.

Don’t get caught up in the current big data hype and settle on a proprietary distribution of open source software before considering all the available options.
Why Visual Analytics?

The average human brain can hold only between three and seven distinct concepts at any one time. Analysts, whatever their background or discipline, struggle to assimilate large volumes of data from multiple sources: to sort it logically into categories, to manipulate and interpret it, and to understand the causal relationships within it well enough to act on it intelligently.

Visual analytics combines human and automated analysis techniques with interactive visualisations to support effective understanding, reasoning and decision-making over very large and complex datasets.

KNIME is an advanced open source analytics platform released under the GNU General Public License, Version 3. It is positioned in the Leaders quadrant of Gartner’s 2016 Magic Quadrant for Advanced Analytics Platforms. The platform provides more than 1,000 modules, with more added regularly. It has connectors for all the major file formats and supports data types ranging from spreadsheets, XML and JSON to images and the major document formats. It supports native and in-database data blending and transformation, predictive analytics, machine learning and visualisation. Use cases for KNIME include:
Churn Analysis & Prediction
Text & Network Mining
Social Media Sentiment Analysis
Credit Scoring
Energy Usage Prediction
Fraud Analysis and Outlier Prediction
And many more
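
KNIME workflows are assembled visually from nodes rather than written by hand, but the logic of a typical use case is easy to sketch in code. Below is a minimal, hypothetical churn-prediction example in Python with scikit-learn; the features, thresholds and synthetic data are illustrative assumptions, not a KNIME or Suritec implementation.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)

# Synthetic stand-in for a customer table: tenure in months,
# monthly spend, and number of support calls (all invented).
n = 1000
X = np.column_stack([
    rng.integers(1, 72, n),    # tenure
    rng.uniform(10, 120, n),   # monthly spend
    rng.poisson(2, n),         # support calls
])
# Toy label: short-tenure or support-heavy customers churn more often.
y = ((X[:, 0] < 18) | (X[:, 2] > 4)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))

In a real deployment the synthetic table would be replaced by actual customer data read through one of KNIME’s connectors, and the trained model would feed a scoring or reporting step downstream.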
KNIME is an important component in our IoT stack, as it provides the necessary capability for advanced analysis of sensor-based data. Its machine learning algorithms in particular add value to our sensor-based solutions.
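
As a hypothetical illustration of that kind of sensor analysis, the Python sketch below flags anomalous temperature readings with scikit-learn’s IsolationForest. The sensor values and fault rates are invented for the example; an actual pipeline would read from a live sensor feed instead.

import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Simulated temperature readings: mostly normal operation at ~21 °C,
# with a handful of injected sensor faults around 35 °C.
normal = rng.normal(loc=21.0, scale=0.5, size=(500, 1))
faults = rng.normal(loc=35.0, scale=2.0, size=(5, 1))
readings = np.vstack([normal, faults])

# Fit an isolation forest and flag roughly 1% of points as outliers.
detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(readings)  # -1 marks outliers

print("Flagged readings:", readings[labels == -1].ravel())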