BLOG DAYA CIPTA MANDIRI GROUP

ManageEngine 2018 Predictions: Virtualization and Cloud

VMblog Predictions 2018
Industry executives and experts share their predictions for 2018.  Read them in this 10th annual VMblog.com series exclusive.
Contributed by Arun Balachandran, Senior Product Analyst, ManageEngine

Virtualization and Cloud 

Hybrid computing and containerization to continue their upward trend

  • Hybrid computing may become the most popular form of cloud usage
Hybrid cloud adoption has been steadily gaining momentum over the last few years despite facing a few hurdles along the way. In most enterprises, CIOs and IT managers have more flexibility and control over which applications should go to the cloud and which should remain in a data center. To get better visibility, accountability and control over their hybrid cloud environments, enterprises should track application performance end to end - irrespective of where the application (or parts of the application) is running. 
  • Containers continue to gain importance
The rise of cloud computing has also given rise to the problem of public cloud vendor lock-in. To counter this problem, some CIOs are looking at multi-vendor strategies and containers for portability. If an enterprise is likely to move essential applications and processing from cloud to cloud or platform to platform, then containers hold the most potential. All major cloud providers now support containers.  

IoT and Artificial Intelligence

The growth of IoT computing environments is generating massive amounts of data, which will require multiple tiers of storage. Most of the data will be perishable and can be discarded after analysis. The major test for organizations is to visualize and find insights from different types of data (e.g., structured, unstructured, images, contextual, dark data and real-time) in the context of their applications. The use of artificial intelligence technologies such as deep learning will play a major role in big data analytics engines that help derive insights from massive streams of data.
From the IT management point of view, IT admins will need to stay on top of the performance of IoT devices and applications, big data repositories, and dynamic cloud environments to resolve performance and availability issues instantly.
##
About the Author
Arun Balachandran
Arun Balachandran is a senior product analyst at ManageEngine, the real-time IT management company, and currently works for ManageEngine's application performance management solution. For more information, please visit buzz.manageengine.com/; follow the company blog at blogs.manageengine.com/ and on LinkedIn at www.linkedin.com/company/manageengine-, Facebook at www.facebook.com/ManageEngine and Twitter @ManageEngine.  

Comparing Public, Private, and Hybrid Cloud Computing


Comparing Public, Private, and Hybrid Cloud Computing


Cloud computing comes in three forms: public cloud, private cloud, and hybrid cloud. Depending on the type of data you are working with, we will compare public, private, and hybrid clouds in terms of the security level and management required.

(Image source: www.dummies.com)
Public Cloud
A public cloud is an infrastructure service provided off-site over the Internet. Public clouds offer a high level of efficiency through shared resources; however, they are also more vulnerable than private clouds. A public cloud is the right choice when:
  • Your standard workload is an application used by many people, such as e-mail.
  • You need to test and develop application code.
  • You have a SaaS (Software as a Service) application from a vendor with a well-implemented security strategy.
  • You need incremental capacity (the ability to add compute capacity at peak times).
  • You are doing collaboration projects.
  • You are doing an ad-hoc software development project using a Platform as a Service (PaaS) cloud offering.
Many IT department executives worry about the security and reliability of public clouds. Take the time to make sure you have planned well for security issues, or the short-term cost savings could turn into a long-term nightmare.
Private Cloud
A private cloud is a service in which the infrastructure is hosted on a private network. Private clouds offer a high level of security and control, but the company still has to buy and maintain the software and infrastructure, which reduces the potential cost savings. A private cloud is the right choice when:
  • Your data and applications are your business. Therefore, control and security are paramount.
  • Your business is part of an industry that must conform to strict security and data privacy requirements.
  • Your company is large enough to run a next-generation cloud data center efficiently and effectively on its own.
To complicate matters, the line between private and public clouds is blurring. For example, some public cloud companies now offer private versions of their clouds, and some companies that used to offer only private cloud technology now offer public versions with the same capabilities.
Hybrid Cloud
A hybrid cloud spans a mix of public and private options from multiple providers. By spreading workloads across a hybrid cloud, you keep each aspect of your business in the most efficient environment possible. The downside is that you have to keep track of multiple different security platforms and make sure that all aspects of your business can communicate with each other. Here are a couple of situations where a hybrid environment is the best choice:
  • Your company wants to use a SaaS application but is concerned about security. Your SaaS vendor can create a private cloud just for your company inside their firewall, and provide a virtual private network (VPN) for additional security.
  • Your company offers services tailored to different vertical markets. You can use a public cloud to interact with clients but keep their data secured within a private cloud.
Cloud computing management requirements become much more complex when you need to manage private, public, and traditional data centers together. You will need to add capabilities for federating these environments.
Source: adapted from www.dummies.com/DummiesArticle/Comparing-Public-Private-and-Hybrid-Cloud-Computing-Options.id

5 Effective Hyperconvergence Strategies (Gartner)


Five Keys to Creating an Effective Hyperconvergence Strategy

 FOUNDATIONAL Refreshed: 06 February 2017 | Published: 29 October 2015 ID: G00292684
Analyst(s):
 

Summary

The true value proposition in hyperconverged systems is often missed in evaluations because of excessive hype among vendors. Here's a framework I&O leaders can use to cut through the hype.

Overview

Key Findings

  • IT leaders are inundated with claims about what hyperconvergence means and its potential benefits, often amounting to hype from the vendors.
  • The integrated system — contrary to its identification with simplicity — actually puts a heavy burden of complexity on IT leaders who make strategic infrastructure decisions for IT corporate services, operations and development.
  • Hyperconvergence expands the variety of choices available to IT leaders, but may add complexity and confusion about what claims of simplicity and flexibility mean to you specifically.
  • Five key attributes can enable planners to cut through the hype and make more-effective hyperconvergence decisions.

Recommendations

  • Create a compelling strategic hyperconvergence evaluation composed of the following five key decision attributes: simplicity, flexibility, selectivity, prescriptive and economic.
  • Parse vendors' claims spread across the five decision determinants by validating their application, use and benefits to your strategic infrastructure objectives.
  • Define, weigh and rank each of these factors according to your needs, and their importance to projects, use cases, in-house technical expertise, budget and objectives (a weighted-scoring sketch follows this list).
  • Combine these five key attributes with other technical evaluation criteria on performance, scaling, resilience, security and availability.
  • Select the best supplier finalists by proofs of concept that deliver on both your performance objectives and the five determinants most important to your IT and business needs.
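As one way to operationalize the "define, weigh and rank" recommendation above, here is a minimal weighted-scoring sketch in Python. The weights, vendor names and scores are illustrative placeholders, not Gartner data; replace them with the results of your own proofs of concept.

    # Weighted scoring of the five HCIS decision determinants (illustrative only).
    WEIGHTS = {  # importance of each determinant to your organization; should sum to 1.0
        "simplicity": 0.30,
        "flexibility": 0.25,
        "selectivity": 0.15,
        "prescriptive": 0.15,
        "economic": 0.15,
    }

    # Hypothetical 1-5 scores from a proof-of-concept evaluation.
    vendors = {
        "Vendor A": {"simplicity": 4, "flexibility": 3, "selectivity": 2, "prescriptive": 5, "economic": 3},
        "Vendor B": {"simplicity": 3, "flexibility": 5, "selectivity": 4, "prescriptive": 3, "economic": 4},
    }

    def weighted_score(scores):
        """Collapse per-determinant scores into one weighted total."""
        return sum(WEIGHTS[k] * v for k, v in scores.items())

    # Rank the finalists; combine this with your technical evaluation criteria.
    for name in sorted(vendors, key=lambda n: weighted_score(vendors[n]), reverse=True):
        print(f"{name}: {weighted_score(vendors[name]):.2f}")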

Analysis

Prioritize and Define Five Key Hyperconvergence Determinants by IT-Business Objectives

We sifted through hundreds of Web pages, presentations, briefings, notes and other materials from vendors, consultants and clients over the past two to three years. One reason for this effort was that, increasingly in this crowded market teeming with new entrants and claims of superiority, several attributes kept reappearing. Systems were almost always declared as simple and easy to use and deploy; highly flexible for a wide range of tasks; offering multiple choices of software and hardware partners; economical; and as having prescriptive qualities and specifications that always maximized performance, utilization, availability and other benefits. We also observed that many of our clients were confused about how best to make a decision that could have lasting positive (or negative) effects in their data centers, depending on the wisdom of their selection. So we developed a framework of five key decision determinants that can be used in hyperconvergence integrated system (HCIS) RFPs to arrive at the most appropriate selection: simplicity, flexibility, selectivity, prescriptive and economic (see Figure 1).
Figure 1. Key Determinants in a Hyperconverged Integrated System Decision
Research image courtesy of Gartner, Inc.
Source: Gartner (October 2015)
Here are examples of the cadence in the literature we found on two of the most oft-cited HCIS benefits: simplicity and flexibility. Simplicity: (1) combines all the infrastructure below the hypervisor, eliminating the need for about a dozen discrete infrastructure and software products; (2) simplifies and streamlines common workflows, eliminating the need for disparate management solutions; and (3) pools and allocates software-defined and physical resources through a single, user-friendly interface. Flexibility: (1) provides the flexibility to pool commodity local hard-disk drive (HDD) storage with RAM and/or flash across multiple server farms; (2) features pay-as-you-grow pricing that offers more flexibility to scale the environment as needs grow; and (3) enables the same systems to act as backup/disaster recovery targets and restore workloads when needed.
Indeed, many more beneficial attributes of these two categories could be added, such as low click provisioning; centrally managed remote distributed sites; fast setup, install and provisioning; scale-out and up; bimodal agility; architecturally adaptable to broad use cases; and so on. When decision time comes, how important are these determinants in the decision process? Perhaps an articulate vendor, or a strong communicating channel, may deliver a potent message of one or a few of these as strengths. Or they may intermingle them among the many other technical minutiae that they are anxious to convey, such as input/output operations per second (IOPS), latency and response times, snapshots, deduplication, tiered storage, etc. Of course, the latter are also important. So we suggest that IT leaders and planners compose a hierarchy of priorities among these determinants as a complementary analysis to the technical dimension. Every IT organization should have its own version of what simplicity, flexibility, selectivity, prescriptive and economic will mean to their organizations in particular (e.g., how they may impact: agility, head count, service catalog offerings, environmental footprint, etc.).

Why the Five Keys Play an Important Complementary Role to Technical Evaluations

Hyperconverged infrastructures potentially represent an important new milestone in delivering lean and agile infrastructures. Gartner calls such systems Mode 2-type platforms for the fast and agile digital business world (see "Kick-Start Bimodal IT by Launching Mode 2" ). HCIS is still several years from commonplace Mode 2 deployments; as such, infrastructures must effectively be fiercely adaptable to managing the rapidly changing and evolving competitive and consumer market. Such environmental forces demand not only technical speed, but also require elastic resource pools, intelligent fabric infrastructures, hypervisors, container-based and open-source ecosystems, quick deployments and retirement, various application templates, automation and orchestration, and hybrid cloud potential. Such Mode 2 systems must satisfy the paradigm of develop, deploy, fail/change often, recover and rejuvenate. Such modus operandi will not be self-evident in pure Mode 1 static and scalar metric evaluations alone, as with most of today's infrastructure. The five keys should help to flesh out the more subtle qualities in the offerings. The difficulty for planners, architects, business management and CIOs is understanding what the vendors associate and imply with their products as simple, flexible, selective, prescriptive and economical, and relating them to your own business needs. In this research, we will provide some of the correlates of the keys, with continuing research on identifying best practices that enable qualitative and quantitative analysis in evaluating and positioning the numerous HCIS, which are now marketed by virtually all system vendors in conjunction with channel and software partners. Here are descriptions to start the evaluation process.

Simplicity

To be simple is not merely to be configured simply, or to be operationally simple. Simple suggests a full life cycle, including upgrades to components for technological advantage (e.g., power consumption); transparent software management enhancements; automated diagnosis and repair; one-stop maintenance; click-and-run provisioning; resource pool fluidity; automated file system management, sharding, tiering and storage reclamation; under-the-hood performance and reliability logical views; etc.

Flexibility

To be flexible often implies commodity parts or SKUs for various use cases and space requirements, or to scale to accommodate various use cases. However, flexibility can have both technical and business connotations. Many users are averse to breaking down existing walls or silos for yet another silo. Systems with high degrees of flexibility should be able to "blend" with existing infrastructure and applications or previous-generation systems through interoperability, offloading and tiering. They may also assume chameleon properties as applications change. They may be able to linearly scale independently by assigned roles by nodes for compute, storage, security and networking.

Selectivity

To be selective extends beyond being flexible, with product, module, rack or node choices; software automation and management; hypervisor selection; centralized and distributed IT services management, etc. Of high importance is whether a system supplier presents a locked-down appliance with a fixed menu of options, or enables key partnerships with innovative hardware and software vendors who agree to integrate, test and validate their solutions on the main supplier's platform. Selectivity as a characteristic may even conflict with simplicity, requiring the IT-business planning and review committee to make trade-offs as part of short-term and longer-term goals. If IT hardware skills exist but orchestration management has been weak, the bias could be shifted toward a strong hardware/software partnership, where these two disciplines are delivered transparently; save wasteful hours of development, test, run and revise time; and increase useful life.

Prescriptive

The prescriptive approach leans heavily on meticulous component selection, integration and tuning, at both hardware and system software levels, complemented by rich, functional software that abstracts and manages components to generate maximum system utilization. The key is achieving predictable performance and availability, with an ability to handle almost anything you can throw at it, as a result of carefully engineered design. The vendor will bet its business model's success on performance as its distinguishing trademark. The IT organization, in turn, will accept the prescribed configuration as long as it runs its applications at predictable service levels with high capacity and utilization for IT and business user needs. These systems may somewhat compromise flexibility in order to deliver the higher priority of performance and predictable behavior. Alternatively, they may point toward a non-HCIS integrated and converged solution.

Economic

"Economic" is a term that should be defined by planners as well as financial and procurement managers. Some IT organizations or procurement departments seek to optimize capital expenditure, while others seek to shift cost burdens to operating expenditure (e.g., cloud). An HCIS decision can focus on the potential total cost of ownership and operational cost savings of appliances, with relatively limited scalability. Until these systems mature, they may not approach nor emulate the scalability and mission-critical attributes of converged infrastructure systems. A cost analysis and comparison with existing infrastructure is always recommended but, in most cases, is very difficult to execute. Most organizations lack a stable base of comparison, factor in the migration or modernization costs, and may also claim that their engineering prowess already exists to create nearly the same equivalence as the packaged systems. We have heard the latter argument often enough to estimate that vendors of all types of integrated systems may, in reality, only have an addressable market of 50% of the total system market through the next three years. Alternatively, those planning "greenfields" and new data center locations are motivated by jump-starting agile and simpler infrastructure to manage and maintain at lower costs.
It's important to note that you need not restrict yourself to these five categories exclusively. You may find a subset as satisfying your evaluation needs, or you may wish to add to them (for example, an argument might be made to include agility). We prefer it be subsumed under simplicity or flexibility, or both. Simplicity, for example, may deliver features that contribute to the increased agility of the system. What are those features? As a separate category, you may want the supplier to articulate the precise features that deliver increased levels of agility for Mode 2 operations. Having a separate category can make the deliverables more compelling and clear.
Advice: Ensure that vendors explain in depth their application of the five key determinants in their solutions to add precision and depth to your decision on how well they match your needs.

Conclusions: Rewrite the Narrative

IT and business leaders who will be responsible for important service delivery must team together and articulate their individual perspectives, derived from the important evaluation attributes of the five key determinants. By laying on the table issues such as:
  • What are the pain points that slow our response times down?
  • Why are we failing so often and taking so long to recover?
  • Why does it take so much time to set up configurations each time a new application is presented to us?
  • Why do we have to be "plumbers" and burrow into the nuts and bolts of the system to find why and where performance slowed?
  • Why do our RFPs fail to deliver what we anticipated?
  • Why are we engaged so often with vendors denying their responsibility for outages or degraded operations?
The five key determinants are designed to break the spell of faster, less expensive refresh cycles over and over again by rewriting the narrative. When it comes time for a refresh, IT planners, engineers, architects and business leaders should design a new narrative. The new actors in this narrative will don different clothing from the standard "double the performance at 30% lower cost." Now, the search should uncover real need-driven value where the devil will be in the details.

Evidence

Some of the principles in this research were tested in a Gartner Research Circle live chat forum. The Gartner Research Circle is a managed panel of IT and business leaders. A screener questionnaire to examine current positions on hyperconvergence was sent to members in North America on 2 September 2015. Ninety-one members responded, and 12 members went on to participate in a moderated live chat on 17 September. All live chat participants were familiar with the term in the early stages of discussion or evaluation. Research was developed describing the results in detail.
The five determinants were developed in research conducted over a three-month period into virtually all vendors' communications, Web-based product descriptions, vendor briefing documents and presentations, and in-person discussions. In addition, numerous client interactions revealed interest factors and motivations in investigating integrated systems.

4 Reasons Why Log Management is Key to CyberSecurity


The Blame Game: Identifying The Culprit During Security Incident Response

After a serious IT security incident is discovered, the priority is to shut it down and recover quickly in a cost-effective manner. However, management will want to find the root of the problem so that they have a place to point the finger, but this is often easier said than done.
Security incidents require a time- and labor-intensive investigation to uncover cybercrime techniques and sift through massive amounts of data. Incidents that involve a privileged account prove to be even more challenging, as authorized insiders or external hackers who have hijacked credentials can modify or delete logs to cover their tracks.
Sophisticated and well-funded cyber criminals often target privileged accounts because they hold the keys to the kingdom, allowing criminals to steal data on a massive scale, disrupt critical infrastructure and install malware. Under the guise of privileged users, attackers can lurk within systems for months, gaining more and more information and escalating their privileges before they are even discovered.
In addition to deliberate attacks, human error is also a factor to consider during an investigation. For example, an inexperienced administrator may have accidentally misconfigured a core firewall, turning a quick resolution into an overwhelming investigation. IT staff members often use shared accounts such as “administrator” or “root”, making it extremely difficult to determine exactly who did what. With this degree of uncertainty, it is easy to start the blame game between parties.
One way to simultaneously combat the threat of external hackers and human error is to collect relevant and reliable data on privileged user sessions. This allows investigators to easily reconstruct user sessions and can reduce both the time and cost of investigations.
In addition to user session monitoring and management, having an incident management process in place will be critical to ensure quick and effective identification of a threat source.
The Incident Management Process
To identify an incident and respond quickly, organizations need to develop a multi-step management process that they can consistently rely on. For starters, NIST, the CERT/CC and ISO 27002 have each outlined a step-by-step process for incident management. These encourage a consistent approach, especially for those organizations under strict compliance regulations. Businesses are expected to regularly define, and in the case of a security event, execute an incident response procedure. They must establish that they are capable of taking action when critical assets are endangered.
The CERT/CC concept has four components. First, an incident is reported or otherwise detected (detection component). Second, the incident is assessed, categorized, prioritized and queued for action (triage component). Third, research is conducted on the incident to determine what has occurred and who is affected (analysis component). Finally, specific actions are taken to resolve the incident (incident response component). Essentially, organizations need to find a process like this that they can implement and reference in the case of a security breach.
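To make the flow concrete, here is a minimal Python sketch that chains the four CERT/CC components into an ordered workflow. The Incident class and the stage handlers are illustrative placeholders, not part of the CERT/CC material itself.

    from dataclasses import dataclass, field

    @dataclass
    class Incident:
        description: str
        severity: str = "unknown"
        history: list = field(default_factory=list)

    def detect(i):   i.history.append("detected")                      # detection: reported or otherwise found
    def triage(i):   i.severity = "high"; i.history.append("triaged")  # triage: assess, categorize, prioritize
    def analyze(i):  i.history.append("analyzed")                      # analysis: what occurred, who is affected
    def respond(i):  i.history.append("resolved")                      # response: actions taken to resolve

    incident = Incident("suspicious root login outside business hours")
    for stage in (detect, triage, analyze, respond):
        stage(incident)
    print(incident.history)  # ['detected', 'triaged', 'analyzed', 'resolved']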
Identifying and Acquiring Data Sources
Deep investigations require organizations to first identify and then collect the data in question. This is the first step in any forensic process. Data sources may include security logs, operations logs and remote access logs that have been created on servers. They can also span client machines, operating systems, databases, and network and security devices. Investigations that involve privileged accounts could also include session recordings, or playable audit trails that can be critical in uncovering what has happened.
Once the data is in sight, the analyst must then acquire it. Some log management tools will centrally collect, filter, normalize and store log data from a wide range of sources to simplify the process. For cases involving privilege misuse, data must also be collected from privileged session recordings.
With all the data in hand, it must then be verified to ensure its integrity. This might include protecting against tampering through the use of encrypted, time-stamped and digitally signed data.
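As a minimal sketch of that tamper-evidence idea, assuming a shared secret key kept out of an attacker's reach, the following Python fragment time-stamps and signs each log record so that later modification is detectable. A production system would typically use asymmetric signatures and a protected key store instead of an in-code key.

    import hashlib, hmac, json, time

    SECRET_KEY = b"replace-with-a-protected-key"  # placeholder; never hard-code real keys

    def seal(record):
        """Time-stamp the record and attach an HMAC over its canonical form."""
        record["ts"] = time.time()
        payload = json.dumps(record, sort_keys=True).encode()
        record["sig"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify(record):
        """Recompute the HMAC and compare; any field change breaks the match."""
        sig = record.pop("sig")
        payload = json.dumps(record, sort_keys=True).encode()
        expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
        record["sig"] = sig
        return hmac.compare_digest(sig, expected)

    entry = seal({"user": "root", "action": "config_change", "host": "fw-core-01"})
    print(verify(entry))        # True: record is intact
    entry["action"] = "login"   # simulate tampering with the stored log
    print(verify(entry))        # False: tampering is detected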
Examination and Analysis 
During an investigation, each piece of data must be closely examined in order to extract relevant information. By combining log data with session recording metadata, the examination of privileged account incidents can be expedited dramatically.
Once the most critical information has been extracted, the analysis process begins. Through machine learning, organizations can analyze privileged user behavior and detect when behavior falls outside their normal operating parameters. When combined with replayable audit trails showing logins, commands, windows or text entered from any session, this can provide a full picture of the suspicious activity. With all of these elements, analysts can create a full timeline of events for the reporting phase.
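To make the baselining idea concrete, here is a toy Python sketch that flags logins whose hour of day deviates strongly from a user's history. Real products learn far richer features (commands, windows, session length, text entered); the data and threshold below are illustrative only.

    from statistics import mean, stdev

    # Hypothetical past login hours for one administrator.
    history = [9, 10, 9, 11, 10, 9, 10, 8, 9, 10]

    def is_anomalous(login_hour, baseline, z_threshold=3.0):
        """Flag a login whose hour lies more than z_threshold deviations from the mean."""
        mu, sigma = mean(baseline), stdev(baseline)
        return abs(login_hour - mu) > z_threshold * max(sigma, 0.5)  # floor sigma for stability

    print(is_anomalous(10, history))  # False: within the normal working pattern
    print(is_anomalous(3, history))   # True: a 3 a.m. login falls far outside it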
Reporting and Resolution
Once all of the data is analyzed, the laborious reporting process can begin. Rapid investigations and the ability to make quick, informed decisions can be challenging and require real-time data about the context of a suspicious event. In these scenarios, access to risk-based scoring of alerts, quick search and easily interpreted evidence can expedite the process.
In today’s fast-moving threat landscape, organizations must have capabilities in place to secure critical assets by managing and monitoring privileged accounts and access. Alongside a robust incident management process, businesses can be prepared for when an incident occurs, and with access to the right data, along with the ability to easily sort through it, they will be empowered to quickly uncover the source of the incident and future-proof systems.
Csaba Krasznay, Security Evangelist at Balabit

Log management plays a serious role in identifying IT security incidents.  Whether you are attacked by a sophisticated cyber criminal or experience a breach due to human error, it is crucial that you get to the heart of the problem quickly and efficiently.  
Luckily, Nagios Log Server makes it easy to interpret, graph, store and manage your system log data so you can easily investigate and correct the problem.  Download the fully-functional trial here.


source: http://www.informationsecuritybuzz.com/articles/blame-game-identifying-culprit-security-incident-response/

Happy birthday PRTG

Happy birthday PRTG
From our member team at PT Daya Cipta Mandiri Solusi
We have been your Gold Partner since 2009
We have installed PRTG in many companies, and we are proud of it


Awingu - a hospital data and application access solution


Awingu is a good solution for accessing hospital applications and data the easy way, using nothing more than an HTML5 browser.

Awingu is not a Virtual Desktop Infrastructure; rather, it is a new, easy way to access the applications and data available at the office / data center.

Below is one of Awingu's customers from the hospital industry.


Contact us for a trial and a presentation about Awingu.
email: askme@dayaciptamandiri.com
Mobile: 08121057533

Application and Data Access Solution with Awingu

One of the challenges I have encountered recently is the number of requests from customers who want to keep accessing their applications and data files comfortably from whatever devices they own, without being tied to one place.
Yes, mobility has become a major factor, especially for those living and working in and around Jakarta. The busy metropolitan lifestyle, longer office hours and persistent traffic congestion have pushed many companies to look for solutions that let employees work to their full potential from anywhere.
But the existing solutions around Bring Your Own Device (BYOD) all have drawbacks. First, the devices are usually owned by the employees, so they are unwilling to have them managed, and the devices cannot easily be enrolled in and controlled by existing Mobile Device Management (MDM) applications. The devices also vary widely in type and brand, which makes existing MDM and BYOD solutions complex and expensive.
Second, companies remain worried that when employees work remotely, files can be copied and easily shared with other parties; the security of file access and working files is a critical issue. Any solution must provide clear audit information.
Third, many companies still run legacy applications. They hesitate to change all their applications at once just so employees can access them from outside the office. The time needed to convert applications into remotely accessible ones, whether web-based or native, is an obstacle.
Awingu answers all of this. Awingu provides an application platform that securely brokers access to existing applications and data, so they can be reached by every employee using nothing more than an HTML5 browser. Virtually every device today supports an HTML5-capable browser.

With the architecture above, Awingu requires few changes to the application and data environment a company already has. Awingu simply acts as an access 'gateway' that lets devices reach the applications and data. Awingu itself ships as a virtual appliance that can be deployed on Hyper-V, KVM and VMware, so installation is quick and easy.
Users who previously accessed applications and data through their office computers will be able to do so through a single URL published as the Awingu server.

They will see the applications they normally use and the data files they can access on the Workspace dashboard. The set of applications enabled for them appears in the Applications tab, and their working files are accessible under Files. All of this connects to the applications and data they already have at the office, including synchronization with files in the cloud such as Dropbox or Google Drive.
This way, all employees can keep using their existing applications and data files from their own devices, whether laptop, tablet or smartphone. And everything is properly recorded in Awingu's audit log.
So what are you waiting for? Awingu costs around IDR 3 million per user, with a minimum purchase of 5 concurrent users (not named users), so the licenses can be shared among many users.
More information about Awingu is available at www.awingu.com, or contact us for a presentation on implementing Awingu in your company / institution. Email: askme@dayaciptamandiri.com, Mobile: 08121057533.


Monitoring Ubiquiti UniFi WiFi with PRTG


Monitoring Ubiquiti UniFi WiFi with PRTG: Total Insight into UniFi Environments

Ubiquiti Networks offers a range of wireless hardware and software for enterprise WiFi and operator WiMAX wireless data communication. In this article, we’ll explain how to monitor Ubiquiti’s UniFi WiFi systems using PRTG.
One way to monitor each of your UniFi access points is with PRTG’s standard SNMP sensors together with Ubiquiti’s private MIB files. This lets you monitor each access point in depth, giving you insight into any data available in the MIB file.
SNMP stands for Simple Network Management Protocol. Its usefulness in network administration comes from the fact that it allows information to be collected about network-connected devices in a standardized way across a large variety of hardware and software types. SNMP is a protocol for management information transfer in networks, for use in LANs especially, depending on the chosen version. Read more ...
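As a hedged illustration of this approach (not the PRTG sensor itself), the Python sketch below polls a single access point over SNMP v1 with the pysnmp library (pip install pysnmp). The host and community string are placeholders, and the standard sysDescr OID stands in for the Ubiquiti private-MIB OIDs you would query in practice.

    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=0),          # mpModel=0 selects SNMP v1, all the UniFi series supports
        UdpTransportTarget(("192.168.1.20", 161)),   # placeholder: one access point at a time
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
    ))

    if error_indication or error_status:
        print("SNMP query failed:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(name.prettyPrint(), "=", value.prettyPrint())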

However, monitoring single access points via SNMP has a few disadvantages:
  • The UniFi series only supports SNMP v1 (see screenshot below)
  • You will need SNMP access to each access point, which could be an issue if the access points are spread across multiple locations
  • You can only view details about one access point at a time, and can only display metrics that one access point knows. Global metrics for the entire installation aren’t available from a single access point because one AP simply doesn’t have a global overview.
[Screenshot: Ubiquiti UniFi SNMP Settings]
So, to improve visibility into your UniFi environment, we’ve created a new custom script sensor to monitor the controller directly, giving you an overview of all of your access points in a single sensor.
A huge thanks to Luciano Lingnau from our technical support team, who published the following script in our Knowledge Base.
This script uses the UniFi RESTful API to pull data into PRTG, where it will then display controller metrics such as the following (see the sketch after this list):
  • Response time from the controller’s API
  • The number of access points connected to the controller (UAPs in “connected” status)
  • The total number of connected clients, including guests
  • The total number of connected guests
  • The number of upgradeable access points (UAPs in “connected” status, with the “upgradeable” flag set)
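For readers who want to see the shape of the approach, here is a hedged Python illustration of logging in to the controller's REST API and summarizing access-point status. The URL, credentials, site name and the exact JSON field names ("state", "upgradable") are assumptions based on the classic controller API; use Luciano's Knowledge Base script for the real PRTG sensor.

    import requests

    BASE = "https://unifi.example.com:8443"   # placeholder controller URL
    session = requests.Session()
    session.verify = False                    # controllers often ship self-signed certificates

    # Log in once; the controller sets a session cookie for subsequent calls.
    session.post(f"{BASE}/api/login", json={"username": "monitor", "password": "secret"})
    devices = session.get(f"{BASE}/api/s/default/stat/device").json()["data"]

    # Summarize the UniFi access points (type "uap") known to this controller.
    aps = [d for d in devices if d.get("type") == "uap"]
    connected = [d for d in aps if d.get("state") == 1]        # assumed: 1 == connected
    print("connected UAPs:", len(connected))
    print("upgradeable UAPs:", sum(1 for d in connected if d.get("upgradable")))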
Since this script collects all data directly from the UniFi controller, you’ll see a global overview about all access points that are connected to that controller. And you only need HTTP access to the controller – you don’t need SNMP access to each individual access point. To run Luciano’s script, check out his article, which includes the requirements, detailed instructions, and (of course) the code for the script.
In addition, Frank Carius, a German blogger, has extended this script to include:
  • Amount of data
  • Clients/ virtual network
  • RX and TX bytes, dropped and errors
  • ….
Frank’s blog article is only available in German, BUT even if you can’t read the article, you can download the script (which is commented in English!) here.
[Screenshot: the extended script in action]
To run the extended script you’ll need to download the file and save it to
C:\Program Files (x86)\PRTG Network Monitor\Custom Sensors\EXEXML
Then, before you run the script, you need to set a few parameters, either inside the PRTG custom script sensor settings, or directly inside the script:
You’ll need to adjust all of the parameters to fit your environment.  The “httppush.url”, for example, is set to “ubiquiti-” in the example script. This is used to create the GUID for the PRTG HTTP Push sensor later in the script:
The GUID consists of the beginning of the URL, the name of the SSID, and the frequency. If you run the script interactively with “-verbose”, you can easily find the URL:
This script is capable of monitoring both the controller and all access points connected to that controller. Detailed per-access-point metrics will require one additional HTTP Push sensor per access point. The HTTP Push sensors need to listen on port 5050 and must have GUIDs that match the string(s) shown above.
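For reference, pushing one value into an HTTP Push Data sensor boils down to a single HTTP GET against the probe on port 5050, as in this minimal Python sketch. The probe host and the GUID token are hypothetical; your token must match the GUID string built from the URL, SSID and frequency as described above.

    import requests

    PROBE = "http://prtg-probe.example.com:5050"   # placeholder probe address
    TOKEN = "ubiquiti-MySSID-2.4"                  # hypothetical GUID; take yours from the -verbose output

    # PRTG's HTTP Push Data sensor accepts the value and message as query parameters.
    resp = requests.get(f"{PROBE}/{TOKEN}", params={"value": 42, "text": "clients connected"})
    print(resp.status_code)  # 200 once the matching push sensor has received the data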
Once the sensors are up and running, and are receiving data from the script, you can use thresholds and notifications, just like with any other PRTG sensor.
And if you have other devices that aren’t covered by the pre-built PRTG sensors, be sure to check out our Script World site for lots more scripts!

Monitoring a KEMP LoadMaster Using PRTG


Monitoring a KEMP LoadMaster Using PRTG: A Detailed How To Guide

There are a lot of solutions to choose from that do load management and security for services; in this article we will be looking at the KEMP LoadMaster.
KEMP Technologies offers "KEMP LoadMaster", which, as the name implies, balances application loads between web servers. Their product offering includes both virtual and hardware appliances. The virtual offering supports most hypervisors and cloud deployment, and they have hardware appliances of varying sizes to support larger loads.

Virtualization is the process of creating a virtual version of something, like computer hardware. It involves using specialized software to create a virtual or software-created version of a computing resource rather than the actual version of the same resource. Read more ...

One of the great things about the KEMP, like PRTG, is that there's a free version available for download. You can download a virtual appliance and run it in your virtual environment. The free version has some restrictions but it’s great for testing, configuration, and lab use. The KEMP LoadMaster also does Reverse Proxy (we’ll refer to it as "Rproxy"), which, in addition to monitoring, is the focus of this article.

Monitoring KEMP LoadMaster

PRTG is a network monitoring tool, so we want to monitor the device status and performance. The KEMP provides this in a couple of ways: through SNMP and through their REST API.
The default discovery of the device gives us the generic SNMP sensors.
The information we are really interested in is how much traffic it is handling.
Using the MIBs supplied by KEMP and the Paessler tools, I created a template for the LoadMaster (available for download from PRTG Script World or directly from GitLab.com/PRTG). The template uses the SNMP Custom Advanced and SNMP Custom Table sensors to get some more information (the process is outlined in this Webinar "SNMP MIB basics – Monitoring with PRTG").

The template creates a health sensor, with a high-level overview of the KEMP status and performance.
It also creates a sensor with metrics for each virtual and real server.
You may have noticed the sensors:
  • rSrvr: 172.30.0.171:23560 Nat 1000
  • vSrv: PRTG1-RP(192.168.0.171)
and
  • rSrvr: 172.30.0.171:80 Nat 1000
  • vSrv: PRTG1-Web(192.168.0.171)
These are the sensors with performance metrics for the real and virtual servers. In particular, these are the sensors that measure the statistics related to the Rproxy that does the SSL offloading for PRTG’s remote probe and PRTG Web GUI respectively.

Monitoring KEMP Using the REST API

In PRTG version 17.3.33/34 we added a new sensor, the "REST Custom Sensor", which can also be used to get a top-level overview on how the KEMP is doing.
So, the next question is: how do you configure it?
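Before wiring it into PRTG, it can help to exercise the API call by hand. The Python sketch below queries the LoadMaster statistics endpoint directly; the /access/stats path and the XML tag names are assumptions drawn from KEMP's RESTful API conventions, and the host and credentials are placeholders, so check the manufacturer's API documentation for your firmware version.

    import requests
    import xml.etree.ElementTree as ET

    resp = requests.get("https://loadmaster.example.com/access/stats",  # assumed endpoint
                        auth=("bal", "secret"),   # the LoadMaster admin account
                        verify=False)             # appliances often ship self-signed certificates
    resp.raise_for_status()

    # Walk the XML response and print a few assumed top-level counters.
    root = ET.fromstring(resp.text)
    for elem in root.iter():
        if elem.tag in ("CPU", "TPS", "ConnsPerSec") and (elem.text or "").strip():
            print(elem.tag, elem.text.strip())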
For installation details of the KEMP, please refer to the instructions on the manufacturer site. 
Please note: We have carefully compiled this information and it is provided to the best of our knowledge. As the solution is not part of PRTG itself, it is not officially supported by Paessler or PRTG Technical Support. Yet, we wanted to share it with you as it might be of interest for many PRTG users.
You must also be aware that if you configure any of the parts incorrectly, you may leave yourself open to an intruder gaining access to anything configured within PRTG. This includes user IDs, passwords, IP addresses, names, etc. In other words: no warranties expressed or implied. Paessler, its employees or partners cannot be held liable for any damages that you may incur as a result of employing a reverse proxy.

PRTG Monitors with UVexplorer


PRTG Monitors

If you monitor your network with PRTG Network Monitor, UVexplorer makes a great companion product to PRTG. In addition to exporting devices, sensors, and maps to PRTG, UVexplorer can also query device status (Up, Down, etc.) from PRTG so that device status can be viewed within UVexplorer's maps and reports. This allows PRTG's powerful monitoring platform to enhance UVexplorer's network discovery and mapping capabilities.
At any point in time, PRTG can tell you what the overall status of a device is (Up, Warning, Down, Paused, etc.). If you want to display device status within UVexplorer, you can create one or more PRTG monitors that query device status from PRTG on a regular schedule (e.g., every 5 minutes). For example, suppose you want to display the state of your core networking devices in UVexplorer (routers, switches, firewalls, etc.). To do this, go to UVexplorer's Monitors tab, and create a PRTG Monitor on your core networking devices. Give it a schedule that meets your needs (e.g., every 5 minutes). Every time this monitor runs, it will query the state for the specified devices from PRTG, and store the state information in the UVexplorer database. UVexplorer will then display these device states within its maps and reports.
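Conceptually, such a monitor reduces to a periodic query against PRTG's documented table API. Here is a hedged Python sketch of that query, as an illustration of the underlying API rather than UVexplorer's actual code; the server URL and credentials are placeholders.

    import requests

    PRTG = "https://prtg.example.com"   # placeholder PRTG server
    params = {
        "content": "devices",
        "columns": "objid,device,status",
        "username": "monitor",
        "passhash": "0000000000",       # placeholder passhash (see Setup -> Account Settings)
    }

    # /api/table.json returns the requested columns for all visible devices.
    for dev in requests.get(f"{PRTG}/api/table.json", params=params).json()["devices"]:
        print(dev["objid"], dev["device"], dev["status"])   # e.g. 1234 core-sw-01 Up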
Creating a PRTG Monitor
To create a PRTG Monitor, do the following:
  • Go to the Monitors tab (at the bottom of UVexplorer's main window)
  • Select "PRTG Monitors" on the left side
  • Right-click in the grid on the right, and select Add from the context menu. This will display the PRTG Monitor configuration form. This form lets you select the PRTG server to be queried, the set of devices for which status should be queried, and the schedule on which UVexplorer should query the PRTG server for status (see image below).
 
Viewing PRTG Device State in UVexplorer
After creating a PRTG Monitor, you can view the status of the monitor's devices (according to PRTG) in the UVexplorer user interface (see images below).
 
You can also view the history of a PRTG monitor by right-clicking on the monitor, and selecting the "PRTG History" option from the context menu. This report displays a state history for each device in the monitor. This lets you see how the state of each device has changed over time (see image below).
 
Auto-Creating PRTG Monitors at Export Time
When you export devices to PRTG, you are given the option of having UVexplorer automatically create and configure a PRTG Monitor on all exported devices so that you can view the states of those devices in UVexplorer. If you use the PRTG export wizard to do your export, the wizard will ask you if you want UVexplorer to auto-create a PRTG Monitor on the exported devices. Or, if you export directly from a map by selecting "Export to PRTG", the export form gives you the option of auto-creating a PRTG Monitor on the devices. Either way, after completing the export, you can go to UVexplorer's Monitors tab, and see the auto-configured PRTG monitor (see images below).
  
Linking from UVexplorer Devices to PRTG Devices
When using UVexplorer with PRTG, it is often convenient to jump between the two environments. For example, if UVexplorer shows a device as being "Down", you will probably want to jump into PRTG to further investigate the situation. To make this easy, in UVexplorer you can right-click on a device, and select the "Show in PRTG" option. Doing this automatically opens PRTG in a web browser, and displays the PRTG device corresponding to the selected UVexplorer device. You can then inspect the states of the device's various PRTG sensors. This works in both UVexplorer device lists and maps (see image below).

PRTG Export with UVexplorer


PRTG Export

If you monitor your network with PRTG Network Monitor, UVexplorer makes a great companion product to PRTG. UVexplorer can export devices and network maps to PRTG, and also automatically configure PRTG device sensors. This allows you to discover devices in UVexplorer, export them to PRTG, and monitor them in PRTG. This gives you the best of both worlds: UVexplorer's detailed and fast network discovery capabilities, combined with PRTG's advanced network monitoring capabilities.
UVexplorer integrates directly with PRTG to provide the following powerful features:
Enhanced Discovery of Network Devices and Connectivity - UVexplorer's discovery is fast, detailed, and accurate. Run network discoveries within UVexplorer, and export discovered devices to PRTG for monitoring.
Automatic Device Sensor Configuration - When UVexplorer exports devices to PRTG, it automatically configures sensors on those devices in PRTG. Rather than creating all possible sensors, UVexplorer creates only those sensor types that you request, which minimizes your sensor count. Specifically, UVexplorer can automatically configure the following PRTG sensors:
  • Ping Sensors
  • SNMP Uptime, CPU, Memory, and Traffic/Interface Sensors
  • WMI Uptime, CPU, Memory, and Disk Space Sensors

Enhanced Network Maps - UVexplorer automatically creates detailed maps of your network, including details about how your devices are connected at the port level. UVexplorer's map editor makes it easy to create high-quality maps, and then export them to PRTG. This gives you great maps in PRTG, with UVexplorer doing most of the work automatically.

Scheduled Network Discoveries - UVexplorer keeps your PRTG devices, sensors, and network maps continuously up-to-date by running network discoveries on a scheduled basis (hourly, daily, weekly, etc.). After completing a scheduled discovery, UVexplorer automatically exports new devices and updated maps to your PRTG server, and also configures sensors on new devices.
Detailed Device Inventory - UVexplorer discovers detailed inventory information about your network devices, and lets you run reports across all devices on your network. The following inventory data and reports are available:
  • Asset details (make, model, serial number)
  • Operating System Version
  • Software Inventory
  • Network Interfaces, Bridgeports, and VLANs
  • Device Connectivity

PRTG Export Wizard

UVexplorer discovers all of your network devices and connections in a matter of minutes. At any time after discovery completes, you can export your network devices and maps into PRTG by clicking the "Export to PRTG" button in the Home toolbar (see below). Clicking this button will start the PRTG export wizard, which steps you through the various export options.
 
Clicking Next on the wizard start page takes you to a page that asks you to select the PRTG server you want to export to.

If you have never before defined credentials for your PRTG server, click the settings button to define those credentials. You will be asked to type in the URL, username, and passhash for the target server. (Your passhash can be found in the PRTG web browser interface by selecting the Setup -> Account Settings -> My Account menu option.)

Clicking Next on the PRTG Server Settings page takes you to a page that lets you specify the name of the PRTG device group your UVexplorer devices will be exported to. You can send your devices to an existing PRTG device group, or specify that a new device group be created. If you export devices to an existing PRTG device group, UVexplorer will only add new devices that are not already in the target group (i.e., duplicate devices are not created). You can also specify an existing parent device group within which the new device group will be created.

In addition to creating a device group, you can also ask UVexplorer to create a network map in PRTG. The exported map will contain all of the exported devices, including the physical connections between them. You can specify whether a new map should be created, or an existing one should be overwritten. If you overwrite an existing PRTG map, UVexplorer will only add new devices that are not already on the map (i.e., duplicate map devices are not created).

Clicking Next on the Device Group/Map page takes you to a page that lets you select the devices to be included in the exported device group. You can select any or all devices in the discovery result that is currently open in UVexplorer.

Clicking Next on the Devices to Export page takes you to a page that lets you specify what kinds of sensors should be created on the devices in PRTG. Often, you only want one or two kinds of sensors created, not all possible sensors. By exporting devices from UVexplorer into PRTG, you can get only the sensors that you really want, thus minimizing your sensor counts.

Clicking Next on the PRTG Monitor Settings page takes you to a page that lets you initiate the export to PRTG.

Clicking Finish on this page will begin your export. The PRTG export form will appear. It displays all of your selected export settings, and also provides feedback on the progress of the export.

After the export is complete, the "Goto Group" and "Goto Map" buttons on the export form will be enabled. Clicking "Goto Group" will open a web browser, and take you to the PRTG device group that was created by the export.

Similarly, clicking the "Goto Map" button will open a web browser, and take you to the PRTG map that was created by the export (if any).

Through the PRTG user interface you can view the sensors that were automatically configured by UVexplorer on the exported devices, as shown below.
 

Exporting Directly from UVexplorer Maps

You can also export devices to PRTG without using the PRTG export wizard. This may be more convenient once you are comfortable with the export process. Specifically, you can export the devices on any UVexplorer map by right-clicking on the map, and selecting the "Export -> Export to PRTG" option (see image below).

This displays the PRTG export form, which lets you enter all of the export settings on one form (instead of using the PRTG export wizard). After entering your export settings, click the Export button to export all of the map's devices to PRTG.

Five 'E's for your career

The 5 "E" words that you need for your career


A young manager in his early 30s asked me what it takes to be successful in our careers.

I have been in HR roles for more than 20 years (including roles with global and Asia Pacific coverage in different companies), and I have observed certain characteristics in successful executives in different countries.

This is what I observe from them...

Enthusiasm

They show up every morning full of energy and spread a positive aura to others.

Emotional Intelligence

They manage and control their emotions very well in their roles as team player, leader and influencer.

Edge

They always upgrade their knowledge and competences. They read and learn from others, everyday!

Execution

They deliver what they promised, beyond the slides that they presented. The key words are excellence in execution.

Empowering others

In their role as leaders, they are no longer busy doing the work themselves. They empower others. Now they are busy coaching and developing others.

Warm Regards,

Pambudi Sunarsihanto

Fanky Christian
mobile: 08121057533
fankychristian.blogspot.com

Monitoring Smart Cities with PRTG


Why Are We Making Cities Smart?

Everything seems smart these days: smart phones, smart homes, smart watches and smart cities. It seems humans are devising more and more “Things” to think for us. Does that mean humans will become more stupid? Whatever your stance on this, the technological advancement behind it is recognizably impressive.

What Makes A City Smart?
According to Wikipedia, a city is smart if it integrates information and communication technology (ICT), and Internet of Things (IoT), in a secure fashion to manage its assets.
The interconnectivity of these assets allows the city to monitor what’s going on in the city, how it’s evolving and how to enable a better quality of life.
Examples of things monitored in smart cities are:
  • Waste Management – monitoring the fullness of public waste bins around the city, so they are only emptied when full (saving costs and reducing congestion).
  • Parking Sensors – these show you the availability of parking spots in a city. There are apps that tap into this data, making it easier for drivers looking to park. Not only does this save us time, it also saves fuel and reduces emissions and congestion.
  • Security – integrated sound sensors can detect gun shots and automatically report them to the authorities, reducing the necessary involvement of citizens while making the city feel safer.
The possibilities are endless and the technology advances minute-by-minute.

Who Builds Smart Cities And Why?

It’s city leaders who are recognizing the potential of technology to make their cities safer, convenient and more comfortable for its residents. In some instances, it may also be for prestige and branding.
Whatever the agenda, it’s changing the lives of city-dwellers and putting additional pressure on IT infrastructures supporting the interconnectivity of ‘Things’. There’s increased traffic and data being transferred, which impacts load and bandwidth.

What Are We Doing In This Space?

Our partner Daya Cipta Mandiri is funding a Smart City Center showroom in Mangga Dua Square, Jakarta, Indonesia, supporting the concept of smart cities as a means to enhance the quality of life.
Their mission is to educate, train and assist the local government in developing smart cities, and to raise public awareness.
Most of the smart city projects, they say, “start with infrastructure projects like CCTV and datacenters, which all require monitoring.” They install PRTG to monitor such infrastructures and develop custom dashboards to manage the volume of data they get back.

Here is one of the dashboards, developed using PHP/Java and PRTG's API.
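As a hedged sketch of the kind of call such a dashboard back end might make, the Python fragment below pulls hourly averaged historic data for one sensor through PRTG's documented API; the server, sensor ID, date range and credentials are placeholders.

    import requests

    PRTG = "https://prtg.example.com"     # placeholder PRTG server
    params = {
        "id": 2468,                       # placeholder sensor ID to chart
        "avg": 3600,                      # averaging interval in seconds: one value per hour
        "sdate": "2018-01-01-00-00-00",   # start of the reporting window
        "edate": "2018-01-02-00-00-00",   # end of the reporting window
        "username": "dashboard",
        "passhash": "0000000000",
    }

    # /api/historicdata.json returns time-stamped channel values for the sensor.
    data = requests.get(f"{PRTG}/api/historicdata.json", params=params).json()
    for row in data.get("histdata", []):
        print(row["datetime"], row.get("value"))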
It’s clear that as cities get smarter, IT must too. Smart cities have sensors monitoring things like parking spaces, waste bin capacity and security cameras, but who is monitoring the monitors?
Smart cities need to be equipped to manage the data load and connectivity of IT assets on the network if they are to uphold the convenience and security they promise their residents. It needs a human to recognize this and take action; a smart one.
source: https://blog.paessler.com/why-are-we-making-cities-smart

Magic Quadrant for General-Purpose Disk Arrays


Magic Quadrant for General-Purpose Disk Arrays

Published: 31 October 2017 ID: G00319539
Analyst(s):
 

Summary

Storage vendor consolidation, competition from SDS vendors and cloud providers, and new sales and support models are continuing to change the storage market. I&O leaders who understand the opportunities and risks created by these changes will make better infrastructure refresh decisions.

Market Definition/Description

General-purpose storage arrays are designed to satisfy the storage needs of applications running on physical or virtual servers. Block and file protocols (such as FC, iSCSI, NFS and SMB) continue to dominate this market. Gartner segments this market into the general-purpose disk array market, which includes all disk and hybrid arrays, and the solid-state array (SSA) market. This Magic Quadrant excludes SSA, object and distributed file system storage, as well as software-defined storage (SDS), because they have their own Magic Quadrant and/or Critical Capabilities research.

Magic Quadrant

Figure 1. Magic Quadrant for General-Purpose Disk Arrays
Research image courtesy of Gartner, Inc.
Source: Gartner (October 2017)

Vendor Strengths and Cautions

DDN

DDN is a privately held company that focuses on delivering storage solutions for high-performance computing (HPC), media and analytics use cases. DDN's strength in these market segments is highlighted by its OEM agreements with Hewlett Packard Enterprise (HPE), Dell, Fujitsu, Atos, Cray, IBM and Lenovo, and are a reflection of the management team's deep engineering roots. DDN's hybrid storage product portfolio mainly consists of the SFA14K and the smaller SFA7K series. NAS and object storage functionality is provided by configuring the SFA7K and SFA12K systems as the back-end storage, namely the GRIDScaler and EXAScaler platforms. EXAScaler is built on the Lustre file system and positioned for HPC use cases, while GRIDScaler leverages IBM Spectrum Scale (based on GPFS) and is positioned for big data use cases. DDN also offers vertical-specific solutions, namely BIOScaler and MEDIAScaler for life sciences and media content production workflows, respectively. The Storage Fusion Xceleration (SFX) flash cache accelerates file system performance and supports multiple configurations depending on the pattern of reads and writes. DDN has its support and professional services teams present in all major geographies that help the vendor to respond to specific customer requirements adequately.
STRENGTHS
  • DDN has significant deployments and mind share in the HPC market, where its arrays offer high-performance storage as well as archiving capabilities.
  • DDN has vertical market solution specialists in life sciences, manufacturing, media, oil and gas, and finance, giving it expertise within its target verticals.
  • The SFA series provides deep integration with VMware and Microsoft Hyper-V storage APIs.
CAUTIONS
  • Data services such as compression, deduplication and writeable snapshots are available only when configured as enterprise fusion architecture (EFA).
  • The SFA series does not support file protocols natively and thus cannot be positioned as unified storage for general-purpose use cases.
  • DDN's lack of mind share in the general-purpose storage market makes standardizing on its technology more difficult for storage architects seeking simplification.

Dell EMC

Dell's acquisition of EMC is now more than a year old. While organization and personnel adjustments are still occurring, the new organization's strategy for product rationalization is now in much clearer focus and can best be summarized as, "Do no harm while repositioning the company and its product portfolio." This has translated into maintaining investments in Unity, VMAX, Isilon and SC (aka Compellent) series storage arrays, minimizing interseries competition and creating "better together" integrated infrastructure solutions that leverage heritage Dell-owned technologies without putting partner relationships at risk. Omitted from this list is the PS series, which is on the road to end of life.
Dell EMC is taking advantage of its status as a privately held company by prioritizing the creation of lease agreements that publicly traded companies have difficulty in countering, the development of indirect channel-centric marketing and sales programs, and the maintenance of an image of technology leadership over other key performance indicators, such as individual transaction profitability and R&D to revenue ratios. Much of Dell EMC's sales success is attributable to upgrading the EMC installed base with Unity, VMAX and Isilon arrays configured with solid-state drives (SSDs), and to product enhancements that improve staff productivity and business continuity. Dell Storage Center Operating System 7 (SCOS 7), the latest version of the SC microcode, is providing similar functional enhancements to the Dell Compellent installed base; of particular note is SCOS 7's support of Live Volume and compression and deduplication across SSD and hard-disk drive (HDD) tiers of storage.
STRENGTHS
  • Innovative rental and lease offerings, multiyear enterprise license agreements, and the SC series' perpetual right to use software license agreements that waive one-time charges when doing array refreshes keep Dell EMC on many end-user shortlists.
  • A broad portfolio of storage arrays that are competitive within their respective market segments enables customers to choose storage solutions that optimally align with application needs without complicating vendor management.
  • Dell EMC's faster-than-forecast paydown of its debt and its broad product portfolio are keeping it a "safe choice" in a chaotic market.
CAUTIONS
  • Dell EMC's changes in senior management, loss of experienced sales and support personnel, and increased emphasis on hyperconverged integrated systems (HCISs), cloud and the Internet of Things (IoT) are disrupting customer relationships and plans and creating opportunities for competitors.
  • Managing a disparate collection of Dell EMC storage systems complicates vendor and asset management and adversely affects operational efficiency.
  • Dell EMC's strategy of "better together," a slowdown in the cadence of significant product enhancements, and limited product rationalization may be precursors to declines in array competitiveness.

Fujitsu

Fujitsu's high-level direction is to provide a solution and service business, rather than specific point products. Nevertheless, Fujitsu also resells the NetApp FAS and AFF series arrays in situations when a customer requires a large file-storage-oriented solution. Due to this, the Fujitsu marketing and sales emphasis is not solely on products, but also on the wider IT solution, and therefore customers are not as aware of Fujitsu storage array marketing programs as they are with product-oriented competitors. Taking this into account, Fujitsu offers the Eternus DX S3 series of storage arrays, which consists of the midrange DX500 and DX600, and the high-end DX8700 and DX8900 models. The DX500 S3 and DX600 S3 have not been upgraded since November 2013. Fujitsu made the highly competitive AF250 and AF650 SSAs available in November 2016; they use the same administration GUI and can cluster and replicate data to and from the disk arrays. While Fujitsu's long-term direction is toward SSAs, the vendor also enables the integration of the Eternus DX into OpenStack environments by providing the OpenStack Cinder driver for Eternus DX storage. Fujitsu has provided customers with transparency into the performance and pricing of its storage arrays, and continues to do so with an SPC-2 benchmark published in May 2016.
STRENGTHS
  • All DX series arrays share the same microcode, administration GUI, replication and clustering for high availability.
  • Fujitsu has deep and broad R&D and engineering resources that enable it to develop highly scalable multipetabyte DX8900 arrays and a family of hybrid arrays and SSAs, which all interoperate with each other.
  • Data compression and deduplication are available and included in the purchase price of the DX500 and DX600 arrays.
CAUTIONS
  • The high-end DX8900 arrays do not offer data compression or deduplication.
  • Fujitsu made available new DX60, DX100 and DX200 S4 entry-level arrays in May 2017, which can now scale to similar capacities and performance levels as the midrange S3 arrays.
  • Fujitsu does not offer a cloud gateway, connector or interface specifically for Amazon S3 or Azure with its storage arrays.

Hitachi Vantara

Hitachi Vantara (formerly known as Hitachi Data Systems [HDS]) is a company in transition. Changes in senior management and business strategies, reductions in workforce, and Hitachi's decision to focus Hitachi Vantara on IoT opportunities within its installed base suggest that storage marketing and sales resources will continue to be constrained for the foreseeable future. Hence, we expect Hitachi to continue to rely on its reputation for building reliable high-performance storage arrays, its customer base and its partners to support its storage business. While this is suboptimal from a storage growth perspective, this strategy does have the advantage of using the vendor's large global customer base and its well-respected worldwide support organization to maximum effect.
Hitachi has kept its Virtual Storage Platform (VSP) and Hitachi Network Attached Storage (HNAS) gateway offerings competitive by most measures of product attractiveness, particularly those that are important to business and mission-critical workloads. Sharing a common architecture and management tools from the smallest VSP G200 to the flagship VSP G1500 preserves customer investments, policies and procedures, and leverages ecosystem-related investments. Hitachi-engineered Flash Modules (FMDs) are now available within all VSP G series systems. FMDs provide wire-speed in-line data compression without consuming controller resources by offloading the compression overhead onto a custom application-specific integrated circuit (ASIC). Data deduplication is restricted to the flash (FMD and SSD) layer for block workloads, but spans all tiers of storage for file workloads using the integrated NAS modules or HNAS gateways. Other product enhancements have focused on improving ease of use, developing tighter integration with VMware, developing a hybrid cloud solution and increasing the supported distance of the Global-Active Device (GAD) offering to 500 km.
STRENGTHS
  • A worldwide presence, a reputation for building reliable storage arrays, an effective support organization and being part of a large conglomerate have maintained Hitachi Vantara's persona of being a "safe" storage vendor.
  • The VSP's common architecture, administrative tools, interoperability, ecosystem and scalability simplify the sales cycle and align well with channel capabilities.
  • GAD, Hitachi Automation Director, Hitachi Infrastructure Analytics Advisor, and tiering to Amazon Web Services (AWS), Microsoft Azure and Hitachi Content Platform (HCP) are improving usable availability and staff productivity, and keeping the VSP on user shortlists.
CAUTIONS
  • Hitachi's change in company direction and focus on its installed base leaves it generally unable to influence the general-purpose disk array (GPDA) market and puts its ability to execute its new strategy at risk.
  • The vendor's lack of HDD-level compression and deduplication, coupled with a reluctance to compete on price, limits its appeal in price-sensitive customer environments.
  • Despite Hitachi Storage Advisor improvements, administrative and operational GUIs, and intuitiveness relative to Hitachi Command Suite, management complexity remains a problem within the VSP and, by inference, HNAS installed bases.

HPE

The Hewlett Packard Enterprise (HPE) storage array portfolio consists of the XP7, 3PAR StoreServ, StoreVirtual and HPE Nimble Storage CS-Series arrays. The CS-Series of midrange arrays was added to the HPE portfolio with its acquisition of Nimble Storage in April 2017. Nimble Storage sales, marketing and operations have been fully merged into centralized HPE corporate functions. Nimble R&D and support operate as a group and report to a Nimble team within the storage business. The Nimble CS-Series is clearly positioned to complement, not replace, the HPE 3PAR StoreServ series. Nimble's InfoSight will also enhance HPE's service and support effectiveness by improving remote support, analysis and predictive monitoring. Differentiation in arrays is primarily in architecture, scale, ease of use, and breadth of protocol and server support. Therefore, for the most common and standard use cases, such as server virtualization, all of the arrays in the HPE storage portfolio can be successfully implemented. The HPE 3PAR StoreServ series of arrays was recently enhanced with the 20000 R2 series, which became available in June 2017. The HPE XP7, which is sourced from Hitachi Ltd., Japan, is sold into very specific point solutions where niche protocol connections are required and, therefore, HPE rarely leads with the HPE XP7. Instead, it leads with the 3PAR StoreServ and Nimble CS-Series arrays for the majority of customer requirements.
STRENGTHS
  • Existing HPE 3PAR customers will be able to benefit from HPE InfoSight's fault monitoring and predictive preventative maintenance analysis.
  • HPE Nimble Storage provides competitively priced storage arrays due to an array design that uses 100% industry-standard commodity components.
  • CS3000 and CS5000 arrays' support of HDD-level compression and deduplication improves CS-Series economics across a broad range of workloads.
CAUTIONS
  • 3PAR and Nimble arrays are not compatible; they use different GUIs and cannot replicate between each other.
  • A lack of 3PAR HDD-level compression and deduplication makes comparing 3PAR versus CS-Series ownership costs difficult because such costs become a variable that is influenced by workloads.
  • Rapidly integrating Nimble Storage into HPE will almost assuredly adversely impact existing sales and support relationships.

Huawei

Huawei is a large and growing provider of a broad range of hardware and software technology products. With a global reach, Huawei is leveraging its strong standing as a network equipment provider to the telecommunications industry to sell its enterprise IT products, servers and storage. With a late start, the vendor has emerged as a disruptive provider of general-purpose storage arrays, gaining share from the entrenched legacy vendors. Featuring a scale-out architecture, its OceanStor 5000 V3, 6000 V3 and 18000 V3 platforms cover a broad range of use cases in the midrange and high-end general-purpose storage array and NAS market. The OceanStor offerings have the performance and capacity scale, along with a strong set of controller-based data services, to meet most users' requirements. Since early 2016, Huawei has been strengthening its OceanStor controller-based software support for OpenStack, the public cloud and deployments requiring robust high availability. Huawei strategically engages with a broad range of value-added resellers, distributors, system integrators and cloud service providers to reach the end-user market.
STRENGTHS
  • Huawei's OceanStor operating system underpins its entire general-purpose storage array portfolio, unifying and simplifying management from entry-level to high-end platforms.
  • The OceanStor controller-based HyperMetro Active-Active architecture provides a cost-effective disaster recovery platform, enhancing high availability.
  • Huawei's supply chain, along with its efficient manufacturing processes, enables it to present cost-effective general-purpose storage arrays to the user community.
CAUTIONS
  • Huawei employs a software value-added pricing model that complicates contract administration and makes total cost of ownership projections more difficult to determine.
  • The vendor's postsales customer support is more reactive than proactive, as it only recently introduced a cloud-connected predictive-analytics-driven support system outside of China.
  • The geopolitical attitudes of leaders in some western countries may preclude organizations located in those countries from considering Huawei as a viable supplier of general-purpose storage arrays.

IBM

IBM's general-purpose storage portfolio mainly consists of the Storwize family, DS8000 series and the XIV series. While IBM continues to make incremental investments in the Storwize and DS8000 series, and positions these products aggressively, the cadence of XIV series product enhancements has slowed in the past year. IBM released the V7000 Gen 2+ and all-flash variants, namely the V5030F and V7000F, in August 2016. The V7000 Gen 2+ was released with Intel 10-core Broadwell processors, increased memory and 16 Gbps FC interfaces. The V7000 Gen 2+ can coexist with earlier versions of the V7000 in a single cluster. IBM also continues to release new versions of its SDS product, Spectrum Virtualize, at a regular cadence to minimize potential microcode interoperability problems. Enhancements to the DS8000 series include support for space reclamation and tiering to cloud-based object storage platforms. Tiering to cloud does not require an additional gateway device and can be used to archive aging datasets.
All three platforms — Storwize, DS8000 series and XIV Storage System — can be used as building blocks in VersaStack, IBM's integrated systems product.
STRENGTHS
  • The V7000 offloads compression to the compression accelerator card and thus has negligible performance overhead, as well as a compression guarantee of 2:1.
  • The Storwize family and DS8000 series support a broad range of hypervisors and data protection software.
  • The XIV supports secure multitenancy and integration with VMware, OpenStack and Microsoft Azure Site Recovery; all of this makes it suitable for private cloud and hybrid cloud deployments.
CAUTIONS
  • The DS8000 series' lack of native virtualization, compression and deduplication requires customers to deploy it behind a SAN volume controller (SVC) for these functions, with its accompanying cost and complexity.
  • The Storwize V7000/V5000 and XIV Storage System still do not have data deduplication, which is a common data service offered by most enterprise hybrid storage vendors.
  • Although the V7000 is a scale-out system, the lack of secure multitenancy makes it less attractive for private cloud deployments.

Infinidat

Infinidat, founded in 2011, is a privately held company reporting positive cash flow and profitability from InfiniBox series sales into the high end of the storage array market. InfiniBox differentiation centers on high availability, consistent high performance, autonomic operation, multiprotocol support, and much lower acquisition and ownership costs than competitors' high-end arrays. InfiniBox v.3 microcode, released in September 2016, includes in-line data compression, iSCSI and NAS protocol support, and improved performance analytics. Its optional data compression has a minimal impact on performance because data is only compressed when it is destaged from second-level cache (i.e., SSDs) to back-end HDDs.
InfiniBox has an intuitive web GUI and an architecture that takes ownership of data placement and reduces the skills needed to own and make effective use of InfiniBox F2000, F4000 and F6000 series storage arrays. InfiniBox's automation capabilities are further enhanced by the availability of software development kits (SDKs) and support of nonblocking RESTful APIs. InfiniBox's use of three active controllers per system provides lots of compute power to reduce the impact of microcode updates and controller failures, and enables the ongoing development of new functionality that consumes CPU resources. Overall sales effectiveness is further enhanced by an all-inclusive capacity-based software pricing model, standard three-year 24/7 support and the waiving of installation fees.
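To illustrate what "nonblocking RESTful APIs" means in practice, the sketch below shows the generic asynchronous REST pattern (submit a request, receive a task handle, poll for completion); the endpoint paths and response fields are hypothetical, not Infinidat's documented API.

```python
# Illustrative only: the generic nonblocking REST pattern. Endpoint paths
# and field names here are hypothetical, not Infinidat's actual API.
import time
import requests

BASE = "https://array.example.com/api"  # placeholder management endpoint

def create_volume_async(session, name, size_gb):
    # A nonblocking API answers immediately with a task reference instead
    # of holding the connection open until the work finishes.
    resp = session.post(f"{BASE}/volumes", json={"name": name, "size_gb": size_gb})
    resp.raise_for_status()
    return resp.json()["task_id"]  # hypothetical response field

def wait_for_task(session, task_id, poll_seconds=2):
    while True:
        state = session.get(f"{BASE}/tasks/{task_id}").json()["state"]
        if state in ("done", "error"):
            return state
        time.sleep(poll_seconds)  # caller stays free to do other work between polls

session = requests.Session()
session.auth = ("admin", "secret")  # placeholder credentials
task_id = create_volume_async(session, "vol01", size_gb=100)
print("task finished with state:", wait_for_task(session, task_id))
```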
STRENGTHS
  • Infinidat has achieved profitability with revenue growth balanced across North America, EMEA and the Asia/Pacific region, with new and repeat business.
  • The InfiniBox is a high-end, simple-to-use, multiprotocol, low-cost general-purpose storage array that is feature- and performance-competitive with more expensive high-end arrays.
  • Infinidat's investments in direct sales and technical specialist teams help it and its channel partners to deliver valuable pre- and postsales services to large global enterprise accounts.
CAUTIONS
  • Infinidat's focus on high-end storage leaves it more vulnerable to established vendors with large product portfolios that can use their financial resources to create nonproduct barriers to entry into large accounts.
  • InfiniBox does not yet offer data deduplication, synchronous replication, metro or stretch-cluster, or three-site replication.
  • InfiniBox's lack of 16 Gbps and 32 Gbps FC and 25 GbE support and a 24-port maximum may result in increased connectivity costs and limit usable scalability in input/output (I/O)-intensive environments.

Infortrend

Infortrend is a small but established provider of a broad range of general-purpose storage arrays. With attention to detail and a steady cadence of R&D innovation, Infortrend has been developing and shipping entry-level to midrange general-purpose storage arrays for over two decades. Noted as an early adopter of latest-generation HDDs and SSDs, as well as the latest Intel CPU sets, Infortrend generally delivers products with above-average price/performance. Infortrend's midsize general-purpose storage systems have evolved over time from platforms that supported only block-access protocols to platforms that support multiple protocols, including block-, file- and object-access services. The EonStor GSe Series and GS Series include an integrated Cloud Gateway Engine that supports backup and archiving functions to Amazon S3, Microsoft Azure, Google Cloud Platform and Alibaba Aliyun. With market success in the Asia/Pacific region, Japan and EMEA markets, Infortrend reaches the end-user market exclusively via its channel partners.
STRENGTHS
  • Infortrend uses independent SPC-2 test results to validate EonStor price/performance standing among midrange general-purpose storage arrays.
  • Infortrend's longevity, over two decades, as an independent technology provider of general-purpose storage arrays illustrates that it is providing value to the user community.
  • Super capacitors, which last for the life of the storage system and require no maintenance, are paired with a flash module to protect against data loss due to power outages.
CAUTIONS
  • Quality of service (QoS), multitenancy and vCenter plug-ins are missing from the EonStor feature set.
  • Limited penetration in the Americas may unfavorably impact responsive service and support.
  • Infortrend's client care infrastructure does not include phone home or cloud-connected analytics support.

Inspur

Inspur is a China-based information and communication technology (ICT) vendor that is well-known for its leadership in the server market, primarily targeted at cloud service providers. With a strong presence in the hyperscale market, Inspur has expanded into the global market and made inroads in the service provider segment. However, it also has a comprehensive portfolio of entry-level, midrange and high-end storage, as well as SDS products. Inspur positions the AS5000G2 series as the midrange storage product, and targets the AS18000 at the high-end market. The AS18000 supports both iSCSI and FC block protocols, but does not support file protocols. Data services, namely snapshots, cloning, encryption, local mirroring and QoS, are bundled as part of the base license. Additional services, such as remote replication, virtualization and tiering, require separate licenses. Licenses are priced per system for the midrange systems and priced per controller pair for the high-end systems. Inspur sells a majority of its products directly, and has very few channel partners outside the Asia/Pacific region and Japan.
STRENGTHS
  • The Inspur AS18000 series supports a broad range of hypervisors and backup vendors.
  • The AS18000 series also supports local mirroring and snapshots, as well as remote replication in both synchronous and asynchronous modes, thus providing multiple levels and types of data protection.
  • The autotiering feature supports four different tiers of storage, as well as the ability to archive data to the cloud via an embedded cloud gateway.
CAUTIONS
  • Inspur lacks significant presence and overall brand awareness outside of Greater China.
  • The AS5000G2 supports in-line compression, but does not support data deduplication. The AS18000 lacks data reduction technologies, such as compression and deduplication.
  • A lack of secure multitenancy may be an impediment when customers evaluate Inspur as a solution for delivering infrastructure as a service.

Lenovo

Rather than applying R&D resources to acquire or develop an enterprise storage stack, Lenovo has opted to partner with other technology providers to establish a presence in the enterprise storage array market and to complement its server offerings. Lenovo offers two Lenovo branded product families that span the entry-level to midrange enterprise storage array market — the DS Series and the V Series. Lenovo sources the DS Series from a third party under an OEM arrangement, selling it under the new ThinkSystem DS Series brand. The ThinkSystem is an umbrella brand covering Lenovo servers, storage and networking products. It sources the V Series from IBM under an OEM arrangement, selling it under the established Lenovo Storage V Series brand. Emphasizing leading-edge price/performance attributes, the DS6200 offers a basic set of data service features and limited scalability. The more full-featured V5030 offers more than twice the scalability of the DS Series, as well as a richer set of data service software.
STRENGTHS
  • Lenovo has a global service and support organization that makes it more appealing to international corporations.
  • Both the DS Series and V Series storage arrays are proven technology with an established record of incremental enhancements.
  • Utilizing IBM's Spectrum Virtualize software at its core, the Lenovo Storage V Series storage array facilitates data migration from other enterprise storage arrays.
CAUTIONS
  • Lenovo's reliance on OEM relationships limits its control over product roadmaps and the ability to create differentiated storage offerings.
  • Lenovo does not offer a cloud-connected client support infrastructure for the ThinkSystem DS Series and Lenovo Storage V Series.
  • Selling architecturally dissimilar arrays that lack interoperability or common management tools complicates infrastructure management and sales cycles.

NEC

NEC is an established Japan-based technology and service provider. In the last two years, NEC has made efforts to increase its brand awareness in the U.S., and is working with channel partners to expand its reach in this market. Over the past year, NEC has made several updates to its flagship storage product line, the Mx10 Series. NEC entered the SSA market with the release of the Mx10-F Series. It has established key technology partnerships with Commvault, Veritas Technologies (NetBackup) and Veeam for backup, and with Milestone Systems for video surveillance solutions. From security and compliance standpoints, it supports audit trails, role-based access control (RBAC) capabilities and integration with third-party antivirus scanning engines such as Trend Micro ServerProtect for Microsoft Windows, McAfee VirusScan Enterprise and Symantec Protection Engine. NEC is an active contributor to OpenStack and has regularly released OpenStack Cinder drivers for FC and iSCSI in the last year. It also provides verifiable, independent performance data by publishing SPC-1 benchmark results. Software licensing for the Mx10 Series is controller-based and does not depend on the capacity of the storage procured.
STRENGTHS
  • NEC offers deep integration with VMware environments and supports VMware Virtual Volumes (VVOLs), VMware Storage Replication Adapters and vCenter plug-ins.
  • The NEC Mx10 Series uses MAID technology, which reduces power consumption by automatically turning off HDDs when they are in idle state.
  • The Mx10 Series integrates with NEC's disk-based backup and archive deduplication appliance, HYDRAstor, via DirectDataShadow software, thus streamlining the backup and archive process.
CAUTIONS
  • Compression and deduplication are supported via a third-party appliance, SANblox from Permabit, which was recently acquired by Red Hat, resulting in deployment complexity and increased risk of early product obsolescence.
  • The lack of a significant presence outside of Japan means those customers must carry out a more comprehensive due diligence of NEC's local postsales support capabilities.
  • The Mx10 Series requires procuring an additional NAS gateway to enable file protocols.

NetApp

NetApp is leading its array value proposition with the Data Fabric architecture, which is designed to improve agility and enable customers to exploit the hybrid cloud. To enable this from a storage array perspective, NetApp has positioned its FAS series as general-purpose storage and its E-series as storage building blocks for use with single-application workloads such as video surveillance, technical computing, and backup and recovery, and 100Gb NVMe-over-InfiniBand for HPC workloads. The FAS8200 and FAS9000, which were made available in October 2016 and November 2016, respectively, and the E5760 and E5724, featuring the new SANtricity 11.40 OS, were released in September 2017. NetApp made the Ontap 9.2 version of array software available for the FAS arrays to provide additional data fabric features and capabilities that integrate with cloud services and provide common data management across storage arrays, cloud storage, SDS and hyperconverged solutions. While the value proposition of data management rather than storage management resonates well with customers, it is very complex to implement and support in a heterogeneous storage vendor environment; therefore, most customers' successes will be in a homogeneous NetApp infrastructure. Ontap software is now available as stand-alone SDS software that customers can implement on servers in the cloud (AWS and Microsoft Azure) and on industry-standard servers for remote offices and converged systems.
STRENGTHS
  • NetApp's realignment of its storage array series to clearly identified use cases and its focus on all-flash FAS have improved customer confidence in the vendor's future direction.
  • The Data Fabric architecture integrates NetApp products with the cloud and is an intelligible, easy-to-understand strategy.
  • Ontap 9.2 software provides agility and data management features for application developers in a manner that makes the underlying storage array hardware transparent.
CAUTIONS
  • The E-Series provides commodity-level functionality, competes on price and is often considered a tactical, rather than strategic, product choice.
  • NetApp now sells more SSAs than disk-based general-purpose arrays.
  • NetApp is not using an all-inclusive software pricing model with its FAS arrays, but a set of base and advanced bundles that somewhat simplify acquisition cycles and deployment decisions.

Oracle

The Oracle ZFS Storage Appliance is a technologically mature, general-purpose enterprise storage array platform. The ZFS Storage Appliance's underlying in-memory architecture produces high input/output operations per second (IOPS) performance and low latency, and comes with a rich set of data services. The ZFS Storage Appliance Analytics software, DTrace, provides users with fine-grained visibility into the elements of a ZFS infrastructure, as well as into Oracle Database 12c pluggable databases and virtual machines (VMs) in server virtualization environments, helping to optimize performance and capacity, and troubleshoot issues affecting business applications. Building on the ZFS Storage Appliance core architecture and data service platform, Oracle has implemented specific features to enhance integration, ease of use and performance with Oracle applications and databases. To further support its cloud initiative, the vendor has added a built-in cloud gateway to the ZFS Storage Appliance, enabling users to seamlessly place and retrieve data on the Oracle Public Cloud platform.
STRENGTHS
  • Oracle offers multiple host interface protocols, enabling support for SANs and NAS infrastructures.
  • Independent SPC-2 test results provide transparency regarding ZFS Storage Appliance performance.
  • Hybrid Columnar Compression (HCC) and Oracle Intelligent Storage Protocol (OISP) maximize the capacity utilization and performance associated with Oracle Database 12c.
CAUTIONS
  • The ZFS Storage Appliance does not support a native interface to AWS or Microsoft Azure.
  • Multitenancy support is a missing feature in the ZFS Storage Appliance platform.
  • Adoption of the specific features associated with Oracle applications and databases raises concerns about vendor lock-in.

Promise Technology

Promise Technology has a portfolio of storage arrays and solutions primarily positioned to address opportunities in media and entertainment, rich media, video streaming, file sync and share, postproduction, video surveillance, and backup and archiving. However, only the Promise VTrak E5000, VTrak Ex30 RAID and VTrak A-Class Shared SAN storage appliance offerings qualify as general-purpose storage arrays. With design emphasis on performance associated with large blocks and ease of use, the VTrak E5000 and VTrak Ex30 RAID platforms are basic storage arrays that support a SAN or direct-attached storage (DAS) infrastructure. Conforming to a scale-out architecture, the VTrak A-Class platform incorporates a high-bandwidth and low-latency file system, VTrakFS, to support file sharing applications. With a global presence, Promise has advanced research centers, sales, marketing and service/support personnel located in the U.S., China and its home country, Taiwan. Promise reaches the end-user community via a wide array of reseller and OEM partners.
STRENGTHS
  • As a provider of general-purpose storage arrays, Promise has exhibited staying power in a highly competitive market by focusing on the requirements associated with media and entertainment, postproduction, and video surveillance deployments.
  • Promise provides timely certification for the latest versions of the macOS operating system, as well as support for asymmetric logical unit access (ALUA) to enhance availability and load balancing in a SAN infrastructure.
  • The VTrak A-Class Shared SAN storage appliance supports up to eight VTrak E5000 or VTrak Ex30 RAID nodes, enabling capacity to scale to 7 PBs.
CAUTIONS
  • Autotiering, thin provisioning, QoS, multitenancy and data reduction are missing features in the VTrak E5000 and VTrak Ex30 RAID platforms.
  • Promise's client support infrastructure is a reactive model that is unable to predict potential issues concerning performance and capacity, or perform online root cause analysis.
  • Promise relies on a rather short list of Tier 1 channel partners and system integrators for its financial well-being.

Quantum

Quantum's storage portfolio consists of the QXS hybrid storage system, namely QXS-3 series, QXS-4 series and QXS-6 series products that are targeted at general-purpose workloads as well as vertical-specific use cases. The QXS system also supports data services such as tiering, snapshots, local mirroring and replication capabilities, but lacks data reduction technologies such as compression and deduplication. The QXS Q-Tools provide storage management capabilities such as caching and thin provisioning. QXS supports a broad range of hypervisors and integrates with open-source software platforms such as OpenStack.
Quantum sells all of its storage systems via channel partnerships. Postsales support is addressed by support staff stationed in multiple geographies that mostly address Level 1 and Level 2 cases, while Level 3 issues are addressed by dedicated engineering teams located in Quantum's headquarters. In the past few years, Quantum has increasingly focused on delivering vertical-specific solutions. It has established partnerships with video surveillance software providers and media and content production vendors, and has delivered solutions that address these specific use cases.
STRENGTHS
  • The QXS series offers responsive and granular tiering features.
  • QXS offers highly resilient array controllers and enclosures from a heat, dust and vibration perspective, which meet both NEBS and MIL-SPEC criteria.
  • Quantum has a unique focus on verticals such as media content production and video surveillance, with well-established partnerships with vendors in these verticals.
CAUTIONS
  • The lack of support for file protocols decreases QXS appeal in the midsize enterprise segment, which increasingly prefers unified storage solutions.
  • Additional licenses are required for snapshots, replication and tiering.
  • Performance and capacity do not scale linearly, because the QXS series supports neither a scale-out architecture nor a distributed file system.

Synology

Taiwan-based storage vendor Synology primarily sells NAS solutions to the midmarket segment. Although the Synology XS+/XS series is mainly deployed for file storage, the same platform can also be used to host the vendor's proprietary email server, sync and share server, backup server, and a video surveillance system, each of which can be enabled by downloading separate software plug-ins. The platform also supports iSCSI, thereby providing small or midsize businesses with a low-cost optimized option for traditional SAN storage. Synology offers its customers the flexibility to choose the hard drive type and capacity for each of its storage products and does not bundle its HDDs with the storage array models. Its products are certified with a wide variety of virtualization and cloud platforms, such as VMware, Microsoft, Docker and OpenStack, as well as all major public cloud vendors. All essential data services, such as compression, snapshots and replication, and file and block protocol support, are bundled as part of the base software, which is free of charge. Synology has a rich ecosystem of channel partners to address all major markets, and technical support staff cover all major geographies.
STRENGTHS
  • Synology DiskStation Manager (DSM) is feature-rich and updated at a regular cadence.
  • Synology has a strong presence and mind share in the entry-level and midsize NAS market.
  • The Synology arrays integrate with all major public cloud vendors for use cases such as file sync and share and archiving.
CAUTIONS
  • Synology has a limited presence and mind share in the enterprise storage market.
  • Its storage array platforms can perform a multitude of server and storage roles, thus creating potential confusion during the buying cycle.
  • Synology storage arrays lack scale-out capabilities.

Tegile

Tegile is a vendor that often leads in the ability to implement and offer new storage technologies, purchase methods, guarantees and features. In August 2017, Western Digital announced that it would acquire Tegile, which would bring Tegile more financial, technology and go-to-market resources; however, organizational integration plans are not available and, therefore, the completion and ultimate success of the acquisition cannot yet be determined. Tegile is now working on adding a new tier of persistent storage memory in the controller for hybrid storage arrays to improve performance while maintaining low hybrid storage array purchase costs. Because the storage software is media-independent with no proprietary hardware, Tegile offers unified storage arrays with most protocols and storage media, and can transition to new technologies quickly. Support is provided globally in the Asia/Pacific region, EMEA and the U.S. Tegile's use of an indirect channel sales model is intended to lower costs, thereby enabling customers to buy on-premises private data center storage at public cloud cost. Tegile continues to grow and gain new customers within the hybrid storage array market, but sales of its SSAs are overtaking its HDD-based hybrid arrays. However, since Tegile uses the same storage software in both, the hybrid arrays and SSAs share the same administrative GUI, can replicate to each other, and allow simple customer migrations between SSAs and hybrid arrays.
STRENGTHS
  • Tegile has demonstrated an ability to adapt quickly and implement new features, purchase methods (such as all-inclusive storage software features) and monthly storage subscription charges.
  • Tegile now has large 1 PB, high-availability scale-out arrays that have proven real-world 1,000,000 IOPS capabilities.
  • Tegile's arrays are modern designs that have all the features that incumbent arrays have plus more, such as compression and deduplication for hybrid HDD/SSD arrays.
CAUTIONS
  • Most of the vendor's growth is from the solid-state/flash products, not from the HDD or hybrid arrays.
  • Western Digital will have to make significant investments in marketing, sales and support capabilities to grow Tegile's customer base without negatively impacting customer satisfaction.
  • Synchronous replication is still not available with the Tegile hybrid arrays.

Tintri

After growing its installed base to over 1,400 customers and 4,300 systems, Tintri became a publicly traded company in June 2017. The money raised by this IPO will be invested in marketing, sales and R&D. The Tintri T800 Series of fully autonomous hybrid storage arrays is targeted at customers that do not want to manage or administer their storage arrays or are considering HCIS solutions. Tintri's decision to provision only VMs, rather than both physical servers and VMs, has enabled it to deliver ease of use that is very competitive with HCIS solutions, and to be more prescriptive in tailoring performance to individual VM needs. The arrays only require monitoring, not storage provisioning or configuration. All Tintri arrays provide in-line compression and deduplication across all tiers of storage. This is especially valuable in VDI and dev/test environments.
Tintri Global Center (TGC) Standard is a no-charge management tool that can manage up to 64 systems from a single management console. Features such as VM-based policy management, QoS automation, and systemwide real-time storage and server analytics provide the ability to manage and meet service levels, even as systemwide server and storage performance changes. TGC Advanced with VM Scale-out software is an extra chargeable item that creates a federated pool of storage that can scale up to 40 PBs and 480,000 VMs with accelerated storage live migration through array offloading. Analytics and modeling provide recommendations to storage administrators that minimize the cost of meeting service-level objectives. Tintri Cloud Connector allows workloads to be protected in Amazon S3 or IBM Cloud Object Storage, in addition to Tintri replication and snapshots.
STRENGTHS
  • Tintri's status as a publicly traded storage company differentiates it from privately held storage companies and will give it more credibility in many competitive situations.
  • TGC and VM Scale-out software improves T800 Series' attractiveness in small, midsize and large opportunities by simplifying provisioning and optimizing costs.
  • Tintri has a diversified hybrid and SSA product portfolio running the same OS and management platform, with superb ease of use and competitive compression and deduplication features.
CAUTIONS
  • Tintri's IPO generated less cash than anticipated, which could adversely affect its growth plans and ability to reach profitability.
  • Tintri's decision to support only VMs via NFS or SMB3 file protocol access, and the limited scalability of its T800 Series arrays, force customers that have not fully virtualized their infrastructure and/or that have large-capacity demands to pursue a dual-vendor storage strategy.
  • The inclusion of more storage features in hypervisors and hyperconverged systems competes directly with Tintri's value proposition.

Vendors Added and Dropped

We review and adjust our inclusion criteria for Magic Quadrants as markets change. As a result of these adjustments, the mix of vendors in any Magic Quadrant may change over time. A vendor's appearance in a Magic Quadrant one year and not the next does not necessarily indicate that we have changed our opinion of that vendor. It may be a reflection of a change in the market and, therefore, changed evaluation criteria, or of a change of focus by that vendor.

Added

  • Lenovo
  • Synology

Dropped

  • Dell Technologies (now integrated into Dell EMC)
  • Nimble Storage (acquired by HPE)
  • X-IO Technologies

Inclusion and Exclusion Criteria

The criteria enumerated below apply to established and emerging vendors alike that sell midrange and high-end general-purpose storage systems supporting block, file, or both block and file protocols. Commonly supported protocols include FC, iSCSI, SMB (aka CIFS) and NFS. Less commonly used, but still qualifying, protocols include FCoE and InfiniBand. These systems are optionally configured with a mix of HDDs and/or SSDs.
Product Criteria:
  • Bundled all the hardware and software needed to store and retrieve data using industry-standard block and/or file host connection protocols into a storage array
  • Implemented architectures with no single points of hardware failure
  • Sold systems through indirect or OEM channels, maintained brand awareness with end users, and had an average selling price of more than $24,999
Vendor Criteria:
  • Annual company revenue of $50 million or more
  • A multinational presence and 24/7 support capabilities
Notes:
  • Inclusion of dual-controller, scale-out and high-end storage systems in the same Magic Quadrant does not imply that the differences in usable availability, scalability, performance/throughput and functionality in these different architectural approaches are insignificant.

Evaluation Criteria

Ability to Execute

The Ability to Execute axis highlights the change in vendor positioning directly attributable to vendor actions. Criteria that provide relatively high levels of vendor and product differentiation are more highly weighted than those that have relatively little ability to provide differentiation.
Table 1. Ability to Execute Evaluation Criteria
  • Product or Service: High
  • Overall Viability: Medium
  • Sales Execution/Pricing: High
  • Market Responsiveness/Record: Medium
  • Marketing Execution: High
  • Customer Experience: High
  • Operations: Medium
Source: Gartner (October 2017)

Completeness of Vision

The Completeness of Vision axis highlights the change in vendor positioning directly attributable to vendor actions. Criteria that provide relatively high levels of vendor and product differentiation are more highly weighted than those that have relatively little ability to provide differentiation.
Table 2. Completeness of Vision Evaluation Criteria
  • Market Understanding: Low
  • Marketing Strategy: Medium
  • Sales Strategy: High
  • Offering (Product) Strategy: High
  • Business Model: High
  • Vertical/Industry Strategy: Medium
  • Innovation: High
  • Geographic Strategy: Low
Source: Gartner (October 2017)

Quadrant Descriptions

Leaders

Vendors in the Leaders quadrant have the highest composite scores for their Ability to Execute and Completeness of Vision. A vendor in the Leaders quadrant has the market share, credibility, and marketing and sales capabilities needed to drive the acceptance of new technologies. These vendors demonstrate a clear understanding of market needs, they are innovators and thought leaders, and they have well-articulated plans that customers and prospects can use when designing their storage infrastructures and strategies. In addition, they have a presence in the five major geographical regions, consistent financial performance and broad platform support.

Challengers

A vendor in the Challengers quadrant participates in the broad general-purpose disk array market and executes well enough to be a serious threat to vendors in the Leaders quadrant. Challengers have strong products, as well as a sufficiently credible market position and resources to sustain continued growth. Financial viability is not an issue for vendors in the Challengers quadrant, but they lack the size and influence of vendors in the Leaders quadrant.

Visionaries

A vendor in the Visionaries quadrant delivers innovative products that address operationally or financially important end-user problems on a broad scale, but has not yet demonstrated the ability to capture market share or sustainable profitability. Visionary vendors are frequently privately held companies and acquisition targets for larger, established companies. The likelihood of acquisition often reduces the real versus perceived risks associated with installing their systems.

Niche Players

Vendors in the Niche Players quadrant are often narrowly focused on specific market or vertical segments, such as data warehousing, HPC, low-cost disk-based data retention and other areas that are generally underpenetrated by the larger disk array vendors. This quadrant may also include vendors that are ramping up their disk array offerings, or larger vendors that are having difficulty developing and executing on their vision.

Context

This Magic Quadrant represents vendors that sell into the end-user market with branded disk and hybrid arrays. These arrays may be internally developed, or acquired through an acquisition or OEM agreement. Tight budgets and skills shortages have caused vendors and users to focus on technologies and features that lower acquisition and ownership costs while improving performance and throughput. This has resulted in thin provisioning, autotiering, hybrid configurations (flash and HDDs) and near-autonomic operation becoming ubiquitous in general-purpose disk arrays. It is also driving the deployment of SSAs into I/O-intensive environments and creating opportunities for emerging storage companies that can refactor infrastructure designs to obtain incremental improvements in performance, economics and staff productivity. Examples include HCIS, SDS and cloud gateways that make it practical to implement hybrid on-premises/public clouds. Concerns about security exposures and ever more stringent regulatory requirements are now making self-encrypting disks (SEDs) generally available.

Market Overview

The general-purpose disk array market is declining on a revenue and unit basis, even as capacity shipped continues to grow. This has made vendors ever more aggressive and innovative as they attempt to grow market share and expand into tangential markets, such as HCIS and hybrid cloud. Customer satisfaction is high, with 77% of customers completely satisfied and less than 6% unsatisfied with their general-purpose disk array, per reference checks conducted for this research. Visionary vendors such as Infinidat, Tegile (now part of Western Digital), Tintri and Nimble Storage (now part of HPE) improve the customer experience and maintain pressure on the incumbent vendors with their new offerings, extensive features, and easy-to-use, easy-to-purchase and well-supported storage arrays. Not surprisingly, storage connection protocol usage remains essentially unchanged, with the top three protocols used by customers being FC (47%), NAS (23%) and iSCSI (15%), and the remaining 15% using other protocols, such as FCoE and InfiniBand. The virtualization of more than 80% of user applications and improvements in technology have led users to treat high-end, midrange and NAS systems as roughly equivalent. This practical parity, coupled with tight budgets, insatiable storage demand and improved disaster recovery capabilities, has led many users to let these classes of systems compete against each other, even in business-critical environments.
Emerging storage vendors — particularly those in the Visionaries quadrant — are indirectly influencing the market by using their innovation to influence large established storage vendors. Many large established storage and portfolio vendors are using these emerging storage companies as their primary source of product innovation.
Gartner expects that the advantages of traditional high-end enterprise storage arrays will continue to disappear over the next three to five years as scale-out storage arrays, integrated platforms and infrastructure SDS gain maturity, and market and mind share. However, we do not see the midrange and high-end market segments collapsing into a single market because of prior investments in troubleshooting capabilities and compatibility testing.

Evaluation Criteria Definitions

Ability to Execute

Product/Service: Core goods and services offered by the vendor for the defined market. This includes current product/service capabilities, quality, feature sets, skills and so on, whether offered natively or through OEM agreements/partnerships as defined in the market definition and detailed in the subcriteria.
Overall Viability: Viability includes an assessment of the overall organization's financial health, the financial and practical success of the business unit, and the likelihood that the individual business unit will continue investing in the product, will continue offering the product and will advance the state of the art within the organization's portfolio of products.
Sales Execution/Pricing: The vendor's capabilities in all presales activities and the structure that supports them. This includes deal management, pricing and negotiation, presales support, and the overall effectiveness of the sales channel.
Market Responsiveness/Record: Ability to respond, change direction, be flexible and achieve competitive success as opportunities develop, competitors act, customer needs evolve and market dynamics change. This criterion also considers the vendor's history of responsiveness.
Marketing Execution: The clarity, quality, creativity and efficacy of programs designed to deliver the organization's message to influence the market, promote the brand and business, increase awareness of the products, and establish a positive identification with the product/brand and organization in the minds of buyers. This "mind share" can be driven by a combination of publicity, promotional initiatives, thought leadership, word of mouth and sales activities.
Customer Experience: Relationships, products and services/programs that enable clients to be successful with the products evaluated. Specifically, this includes the ways customers receive technical support or account support. This can also include ancillary tools, customer support programs (and the quality thereof), availability of user groups, service-level agreements and so on.
Operations: The ability of the organization to meet its goals and commitments. Factors include the quality of the organizational structure, including skills, experiences, programs, systems and other vehicles that enable the organization to operate effectively and efficiently on an ongoing basis.

Completeness of Vision

Market Understanding: Ability of the vendor to understand buyers' wants and needs and to translate those into products and services. Vendors that show the highest degree of vision listen to and understand buyers' wants and needs, and can shape or enhance those with their added vision.
Marketing Strategy: A clear, differentiated set of messages consistently communicated throughout the organization and externalized through the website, advertising, customer programs and positioning statements.
Sales Strategy: The strategy for selling products that uses the appropriate network of direct and indirect sales, marketing, service, and communication affiliates that extend the scope and depth of market reach, skills, expertise, technologies, services and the customer base.
Offering (Product) Strategy: The vendor's approach to product development and delivery that emphasizes differentiation, functionality, methodology and feature sets as they map to current and future requirements.
Business Model: The soundness and logic of the vendor's underlying business proposition.
Vertical/Industry Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of individual market segments, including vertical markets.
Innovation: Direct, related, complementary and synergistic layouts of resources, expertise or capital for investment, consolidation, defensive or pre-emptive purposes.
Geographic Strategy: The vendor's strategy to direct resources, skills and offerings to meet the specific needs of geographies outside the "home" or native geography, either directly or through partners, channels and subsidiaries as appropriate for that geography and market.

Gartner Top 10 Strategic Technology Trends for 2018

AI, intelligent apps, intelligent things


Artificial intelligence, immersive experiences, digital twins, event-thinking and continuous adaptive security create a foundation for the next generation of digital business models and ecosystems.
How do designers make cars safer? They treat them like a school of fish. Safe Swarm, recently unveiled by Honda, uses vehicle-to-vehicle communication to allow cars to pass information on to other cars in the vicinity. For example, alerts about an accident miles up the road could be relayed to cars miles back, enabling them to operate collaboratively and intelligently to avoid accidents and mitigate traffic.
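As a rough illustration of the relay idea, the toy simulation below floods an accident alert backward through a line of cars with a hop budget; it is a conceptual sketch only, not Honda's Safe Swarm protocol or any real V2V standard.

```python
# Toy V2V relay: each car rebroadcasts an alert to cars within radio range
# until the hop budget is spent. Conceptual sketch only; not Safe Swarm.
from dataclasses import dataclass, field

RADIO_RANGE_M = 300  # assumed car-to-car radio range, in meters

@dataclass
class Car:
    name: str
    position_m: float                        # position along the road
    seen: set = field(default_factory=set)   # alerts already relayed

def broadcast(cars, origin, alert_id, hops_left):
    """Flood an alert from `origin` to every car in range, spending one hop."""
    if hops_left == 0 or alert_id in origin.seen:
        return
    origin.seen.add(alert_id)
    for car in cars:
        if car is not origin and abs(car.position_m - origin.position_m) <= RADIO_RANGE_M:
            print(f"{origin.name} -> {car.name}: accident ahead ({hops_left - 1} hops left)")
            broadcast(cars, car, alert_id, hops_left - 1)

# Six cars spaced 250 m apart; the car nearest the accident starts the relay.
cars = [Car(f"car{i}", position_m=i * 250) for i in range(6)]
broadcast(cars, cars[5], alert_id="accident-42", hops_left=4)
```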
The evolution of intelligent things, such as collective thinking car swarms, is one of 10 strategic trends with broad industry impact and significant potential for disruption.
“The continuing digital business evolution exploits new digital models to align more closely the physical and digital worlds for employees, partners and customers,” says David Cearley, vice president and Gartner Fellow, at Gartner 2017 Symposium/ITxpo in Orlando, Florida. “Technology will be embedded in everything in the digital business of the future.”

The Intelligent Digital Mesh

Gartner calls the entwining of people, devices, content and services the intelligent digital mesh. It’s enabled by digital models, business platforms and a rich, intelligent set of services to support digital business.
Intelligent: How AI is seeping into virtually every technology and with a defined, well-scoped focus can allow more dynamic, flexible and potentially autonomous systems.
Digital: Blending the virtual and real worlds to create an immersive digitally enhanced and connected environment.
Mesh: The connections between an expanding set of people, business, devices, content and services to deliver digital outcomes.
Trend No. 1: AI Foundation
The ability to use AI to enhance decision making, reinvent business models and ecosystems, and remake the customer experience will drive the payoff for digital initiatives through 2025.
Given the steady increase in inquiry calls, it’s clear that interest is growing. A recent Gartner survey showed that 59% of organizations are still gathering information to build their AI strategies, while the remainder have already made progress in piloting or adopting AI solutions.
Although using AI correctly will result in a big digital business payoff, the promise (and pitfalls) of general AI where systems magically perform any intellectual task that a human can do and dynamically learn much as humans do is speculative at best. Narrow AI, consisting of highly scoped machine-learning solutions that target a specific task (such as understanding language or driving a vehicle in a controlled environment) with algorithms chosen that are optimized for that task, is where the action is today. “Enterprises should focus on business results enabled by applications that exploit narrow AI technologies and leave general AI to the researchers and science fiction writers,” says Cearley.
Trend No. 2: Intelligent Apps and Analytics
Over the next few years every app, application and service will incorporate AI at some level. AI will run unobtrusively in the background of many familiar application categories while giving rise to entirely new ones. AI has become the next major battleground in a wide range of software and service markets, including aspects of ERP. “Challenge your packaged software and service providers to outline how they’ll be using AI to add business value in new versions in the form of advanced analytics, intelligent processes and advanced user experiences,” notes Cearley.
Intelligent apps also create a new intelligent intermediary layer between people and systems and have the potential to transform the nature of work and the structure of the workplace, as seen in virtual customer assistants and enterprise advisors and assistants.  
“Explore intelligent apps as a way of augmenting human activity, and not simply as a way of replacing people,” says Cearley. Augmented analytics is a particularly strategic growing area that uses machine learning for automating data preparation, insight discovery and insight sharing for a broad range of business users, operational workers and citizen data scientists.
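As a toy illustration of what automated insight discovery can look like, the sketch below scans a metric and turns outliers into plain-language findings. A simple z-score rule stands in for the machine-learning models a real augmented-analytics product would use, and the data and threshold are invented.

```python
# Toy "automated insight discovery": scan a metric for outliers and
# turn them into plain-language findings. A z-score rule stands in
# for the ML models a real augmented-analytics tool would apply.
from statistics import mean, stdev

weekly_sales = [102, 98, 105, 99, 101, 180, 97, 100]  # invented data

def discover_insights(series, threshold=2.0):
    mu, sigma = mean(series), stdev(series)
    insights = []
    for week, value in enumerate(series, start=1):
        z = (value - mu) / sigma
        if abs(z) > threshold:
            direction = "above" if z > 0 else "below"
            insights.append(
                f"Week {week}: sales of {value} are {abs(z):.1f} std devs "
                f"{direction} the mean ({mu:.0f}) -- worth investigating."
            )
    return insights

for finding in discover_insights(weekly_sales):
    print(finding)
```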
Trend No. 3: Intelligent Things
Intelligent things use AI and machine learning to interact in a more intelligent way with people and surroundings. Some intelligent things wouldn’t exist without AI, but others are existing things (e.g., a camera) that AI makes intelligent (e.g., a smart camera). These things operate semiautonomously or autonomously in an unsupervised environment for a set amount of time to complete a particular task. Examples include a self-directing vacuum or autonomous farming vehicle. As the technology develops, AI and machine learning will increasingly appear in a variety of objects ranging from smart healthcare equipment to autonomous harvesting robots for farms.
As intelligent things proliferate, expect a shift from stand-alone intelligent things to a swarm of collaborative intelligent things. In this model, multiple devices will work together, either independently or with human input. The leading edge of this area is being used by the military, which is studying the use of drone swarms to attack or defend military targets. It’s evident in the consumer world in the opening example showcased at CES, the consumer electronics event.

Digital

Trend No. 4: Digital Twins
A digital twin is a digital representation of a real-world entity or system. In the context of IoT, digital twins are linked to real-world objects and offer information on the state of their counterparts, respond to changes, improve operations and add value. With an estimated 21 billion connected sensors and endpoints by 2020, digital twins will exist for billions of things in the near future. Potentially billions of dollars of savings in maintenance repair and operation (MRO) and optimized IoT asset performance are on the table, says Cearley.
In the short term, digital twins offer help with asset management, but will eventually offer value in operational efficiency and insights into how products are used and how they can be improved.
Outside of the IoT, there is a growing potential to link digital twins to entities that are not simply “things.” “Over time, digital representations of virtually every aspect of our world will be connected dynamically with their real-world counterparts and with one another and infused with AI-based capabilities to enable advanced simulation, operation and analysis,” says Cearley. “City planners, digital marketers, healthcare professionals and industrial planners will all benefit from this long-term shift to the integrated digital twin world.” For example, future models of humans could offer biometric and medical data, and digital twins for entire cities will allow for advanced simulations.
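At its simplest, a digital twin is a software object kept in sync with telemetry from its physical counterpart. The sketch below mirrors a pump's sensor readings and flags a maintenance need; the class name, fields and threshold are illustrative assumptions, not any vendor's product.

```python
# Minimal digital-twin sketch: a software object mirrors a physical
# pump's telemetry, keeps a state history, and flags maintenance needs.
# All names and thresholds are illustrative assumptions.
import time

class PumpTwin:
    def __init__(self, asset_id, vibration_limit=7.0):
        self.asset_id = asset_id
        self.vibration_limit = vibration_limit  # mm/s, invented threshold
        self.state = {}       # last known sensor readings
        self.history = []     # (timestamp, readings) for later analysis

    def sync(self, readings):
        """Update the twin from a telemetry message off the real pump."""
        self.state.update(readings)
        self.history.append((time.time(), dict(readings)))

    def needs_maintenance(self):
        return self.state.get("vibration_mm_s", 0.0) > self.vibration_limit

twin = PumpTwin("pump-017")
twin.sync({"vibration_mm_s": 3.2, "temp_c": 61})   # normal telemetry
twin.sync({"vibration_mm_s": 8.9, "temp_c": 74})   # degrading bearing?
if twin.needs_maintenance():
    print(f"{twin.asset_id}: schedule inspection (an MRO saving opportunity)")
```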
Trend No. 5: Cloud to the Edge
Edge computing describes a computing topology in which information processing and content collection and delivery are placed closer to the sources of this information. Connectivity and latency challenges, bandwidth constraints and greater functionality embedded at the edge all favor distributed models. Enterprises should begin using edge design patterns in their infrastructure architectures, particularly those with significant IoT elements. A good starting point could be using colocation and edge-specific networking capabilities.
While it’s common to assume that cloud and edge computing are competing approaches, this reflects a fundamental misunderstanding of the concepts. Edge computing speaks to a computing topology that places content, computing and processing closer to the user/things, the “edge” of the network. Cloud is a system in which technology services are delivered using internet technologies, but it does not dictate centralized or decentralized delivery of those services. When implemented together, cloud is used to create the service-oriented model, and edge computing offers a delivery style that allows the disconnected aspects of a cloud service to be executed locally.
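The division of labor can be sketched in a few lines: raw data is reduced at the edge and only compact summaries travel to the cloud. In the sketch below, upload_to_cloud() is a hypothetical stand-in for a real uplink (HTTPS, MQTT, etc.), and the batch size and readings are invented.

```python
# Edge-design-pattern sketch: process raw sensor data at the edge and
# send only compact summaries to the cloud, cutting bandwidth and latency.
# upload_to_cloud() is a hypothetical stand-in for a real uplink call.

def upload_to_cloud(summary):
    print("-> cloud:", summary)  # placeholder for an HTTPS/MQTT upload

def edge_node(readings, batch_size=5):
    batch = []
    for value in readings:          # e.g. a temperature stream
        batch.append(value)
        if len(batch) == batch_size:
            # Only the aggregate leaves the edge, not every raw sample.
            upload_to_cloud({
                "count": len(batch),
                "min": min(batch),
                "max": max(batch),
                "avg": sum(batch) / len(batch),
            })
            batch.clear()

edge_node([21.0, 21.2, 20.9, 35.7, 21.1, 21.0, 21.3, 21.2, 21.1, 21.0])
```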
Trend No. 6: Conversational Platforms
Conversational platforms will drive a paradigm shift in which the burden of translating intent shifts from user to computer. These systems are capable of simple answers (How’s the weather?) or more complicated interactions (book a reservation at the Italian restaurant on Parker Ave.) These platforms will continue to evolve to even more complex actions, such as collecting oral testimony from crime witnesses and acting on that information by creating a sketch of the suspect’s face based on the testimony. The challenge that conversational platforms face is that users must communicate in a very structured way, and this is often a frustrating experience. A primary differentiator among conversational platforms will be the robustness of their conversational models and the API and event models used to access, invoke and orchestrate third-party services to deliver complex outcomes.
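Stripped to its core, a conversational platform maps a free-form utterance to an intent and then orchestrates a service call to fulfill it. The keyword matcher and handler functions below are toy assumptions; production platforms use NLU models and far richer API and event orchestration.

```python
# Sketch of the core of a conversational platform: map an utterance
# to an intent, then invoke a (stand-in) third-party service for it.
import re

def weather_handler(utterance):
    return "Sunny, 28C today."          # stand-in for a weather API call

def booking_handler(utterance):
    m = re.search(r"at (.+)$", utterance)
    place = m.group(1) if m else "the restaurant"
    return f"Reservation requested at {place}"  # stand-in for a booking API

INTENTS = [
    (re.compile(r"\bweather\b", re.I), weather_handler),
    (re.compile(r"\b(book|reserve)\b", re.I), booking_handler),
]

def converse(utterance):
    for pattern, handler in INTENTS:
        if pattern.search(utterance):
            return handler(utterance)
    # This fallback is exactly the "structured communication" problem
    # the paragraph above describes.
    return "Sorry, I did not understand that."

print(converse("How's the weather?"))
print(converse("Book a table at the Italian restaurant on Parker Ave."))
```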
Trend No. 7: Immersive Experience
Augmented reality (AR), virtual reality (VR) and mixed reality are changing the way that people perceive and interact with the digital world. Combined with conversational platforms, a fundamental shift in the user experience to an invisible and immersive experience will emerge. Application vendors, system software vendors and development platform vendors will all compete to deliver this model.
Over the next five years the focus will be on mixed reality, which is emerging as the immersive experience of choice, where the user interacts with digital and real-world objects while maintaining a presence in the physical world. Mixed reality exists along a spectrum and includes head-mounted displays (HMD) for AR or VR, as well as smartphone- and tablet-based AR. Given the ubiquity of mobile devices, Apple’s release of ARKit and iPhone X, Google’s Tango and ARCore, and the availability of cross-platform AR software development kits such as Wikitude, we expect the battles for smartphone-based AR and MR to heat up in 2018.





David Cearley, vice president and Gartner Fellow, discusses the Top Strategic Technology Trends 2018 at Gartner 2017 Symposium/ITxpo in Orlando, Florida.

Mesh

Trend No. 8: Blockchain
Blockchain is a shared, distributed, decentralized and tokenized ledger that removes business friction by being independent of individual applications or participants. It allows untrusted parties to exchange commercial transactions. The technology holds the promise to change industries, and although the conversation often surrounds financial opportunities, blockchain has many potential applications in government, healthcare, content distribution, supply chain and more. However, many blockchain technologies are immature and unproven, and are largely unregulated.
A practical approach to blockchain demands a clear understanding of the business opportunity, the capabilities and limitations of blockchain, a trust architecture and the necessary implementation skills. Before embarking on a distributed-ledger project, ensure your team has the cryptographic skills to understand what is and isn’t possible. Identify the integration points with existing infrastructures, and monitor the platform evolution and maturation. Use extreme caution when interacting with vendors, and ensure you are clearly identifying how the term “blockchain” is being used.
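The core data structure is simple enough to sketch: each block commits to the hash of its predecessor, so tampering anywhere breaks the chain. The Python below shows only that property; consensus, tokens and networking, which real blockchains add on top, are deliberately omitted.

```python
# Minimal hash-chained ledger sketch: each block commits to its
# predecessor's hash, so tampering anywhere breaks verification.
import hashlib, json, time

def make_block(prev_hash, payload):
    block = {"time": time.time(), "prev": prev_hash, "payload": payload}
    block["hash"] = hashlib.sha256(
        json.dumps({k: block[k] for k in ("time", "prev", "payload")},
                   sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    for prev, cur in zip(chain, chain[1:]):
        recomputed = hashlib.sha256(
            json.dumps({k: cur[k] for k in ("time", "prev", "payload")},
                       sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

chain = [make_block("genesis", {"note": "start"})]
chain.append(make_block(chain[-1]["hash"], {"from": "A", "to": "B", "amt": 10}))
print("valid:", verify(chain))        # True
chain[1]["payload"]["amt"] = 1000     # tamper with a past transaction
print("valid:", verify(chain))        # False
```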
Trend No. 9: Event-Driven
Digital businesses rely on the ability to sense and be ready to exploit new digital business moments. Business events reflect the discovery of notable states or state changes, such as completion of a purchase order. Some business events or combinations of events constitute business moments: a detected situation that calls for some specific business action. The most consequential business moments are those that have implications for multiple parties, such as separate applications, lines of business or partners.
With the advent of AI, the IoT, and other technologies, business events can be detected more quickly and analyzed in greater detail. Enterprises should embrace “event thinking” as part of a digital business strategy. By 2020, event-sourced, real-time situational awareness will be a required characteristic for 80% of digital business solutions, and 80% of new business ecosystems will require support for event processing.
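A minimal sketch of event thinking: handlers subscribe to business events, and a combination of events is detected as a business moment that triggers an action. The event names and the replenishment rule below are invented examples, not a reference architecture.

```python
# Event-thinking sketch: a tiny publish/subscribe bus where a
# combination of events is detected as a "business moment".
from collections import defaultdict

subscribers = defaultdict(list)
recent = set()

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, data):
    recent.add(event_type)
    for handler in subscribers[event_type]:
        handler(data)
    # Business moment: two related events observed together.
    if {"purchase_order.completed", "inventory.low"} <= recent:
        print("business moment: expedite replenishment for", data["sku"])

subscribe("purchase_order.completed",
          lambda d: print("order completed:", d["sku"]))
subscribe("inventory.low",
          lambda d: print("stock running low:", d["sku"]))

publish("purchase_order.completed", {"sku": "X-100"})
publish("inventory.low", {"sku": "X-100"})
```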
Trend No. 10: Continuous Adaptive Risk and Trust
Digital business creates a complex, evolving security environment, and the use of increasingly sophisticated tools increases the threat potential. Continuous adaptive risk and trust assessment (CARTA) allows for real-time, risk- and trust-based decision making with adaptive responses to security-enable digital business. Traditional security techniques using ownership and control rather than trust will not work in the digital world. Infrastructure and perimeter protection won’t ensure accurate detection and can’t protect against behind-the-perimeter insider attacks. This requires embracing people-centric security and empowering developers to take responsibility for security measures. Integrating security into your DevOps efforts to deliver a continuous “DevSecOps” process, and exploring deception technologies (e.g., adaptive honeypots) to catch bad guys who have penetrated your network, are two of the new techniques that should be explored to make CARTA a reality.
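As a rough illustration of the CARTA idea, the sketch below scores every request from current signals and adapts the response, rather than making a one-time perimeter decision. The signals, weights and thresholds are invented for illustration, not any vendor's actual model.

```python
# Continuous adaptive risk/trust sketch: every request is scored from
# current signals and the response adapts (allow, step-up, or block).
# Signals, weights and thresholds are invented for illustration.

def risk_score(signals):
    score = 0
    if signals.get("new_device"):         score += 30
    if signals.get("unusual_location"):   score += 30
    if signals.get("odd_hours"):          score += 20
    if signals.get("sensitive_resource"): score += 20
    return score

def decide(signals):
    score = risk_score(signals)
    if score < 30:
        return "allow"
    if score < 60:
        return "step-up authentication"   # adaptive, not binary
    return "block and alert"

print(decide({"new_device": False}))                          # allow
print(decide({"new_device": True, "odd_hours": True}))        # step-up
print(decide({"new_device": True, "unusual_location": True,
              "sensitive_resource": True}))                   # block
```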
source: https://www.gartner.com/smarterwithgartner/gartner-top-10-strategic-technology-trends-for-2018/

AKCP SP2 for monitoring industrial pushbuttons

AKCP devices can be used to monitor the pushbuttons operated in various areas of your factory, reporting the state of each button.

Below is an example implementation of the AKCP SP2 monitoring an ON-OFF switch that automatically turns on a function lamp, with a dry contact also wired to the switch.

Weir Minerals use AKCP SP2 for industrial monitoring


AKCP provided 30 sensorProbe2 devices with dry contacts to be integrated into the “Andon” station network in the Weir Minerals Manufacturing facility in Madison, Wisconsin, USA. Andon systems are one of the tools Weir Minerals use as part of their “lean manufacturing” process that helps to identify and resolve manufacturing support issues. The objective of this system is to resolve any conditions that are inhibiting production as rapidly as possible.
The system at Weir has 30 stations, each consisting of an SP2 device, switch box and light stack. The switches are connected to the light stack, and the machine operator turns the switch to illuminate the appropriate light should a problem arise. This also trips an associated dry contact connected to the SP2 device. The SP2 then alerts the response team members designated to address that specific condition via e-mail and SMS text messaging, directing them to the specific location in the factory.
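The flow is straightforward to mimic in software. The sketch below polls dry-contact inputs and emails the designated team when one trips; read_dry_contact() is a hypothetical stand-in for querying the SP2 (for example over SNMP), and the addresses and mail host are invented.

```python
# Sketch of the Andon flow: poll dry-contact inputs and notify the
# response team when one closes. read_dry_contact() is a hypothetical
# stand-in for querying the SP2; addresses and host are invented.
import smtplib, time
from email.message import EmailMessage

def read_dry_contact(station):
    """Hypothetical reader: returns True when the station switch is turned."""
    ...

def notify(station):
    msg = EmailMessage()
    msg["Subject"] = f"Andon alert: station {station} needs support"
    msg["From"] = "andon@example.com"            # invented addresses
    msg["To"] = "response-team@example.com"
    msg.set_content(f"Dry contact tripped at station {station}.")
    with smtplib.SMTP("mail.example.com") as smtp:
        smtp.send_message(msg)

def watch(stations, interval=5):
    tripped = set()
    while True:
        for s in stations:
            state = read_dry_contact(s)
            if state and s not in tripped:
                notify(s)            # alert once per trip
                tripped.add(s)
            elif not state:
                tripped.discard(s)   # contact reset, re-arm the station
        time.sleep(interval)

# watch(range(1, 31))  # e.g. the 30 stations at Weir
```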

Forrester's 10 Cloud Computing Predictions For 2018

These and many other fascinating insights are from Forrester’s Predictions 2018: Cloud Computing Accelerates Enterprise Transformation Everywhere (PDF, 12 pp., client access required). Forrester’s predictions reflect the growing dominance of cloud application and development platforms and their role in revolutionizing new business models across enterprises today. According to the study, Forrester clients are no longer questioning whether the cloud is right for their business; they now scramble to decide how soon and how much.
Key takeaways from Forrester’s 10 Cloud Computing predictions for 2018 include the following:
  • The total global public cloud market will be $178B in 2018, up from $146B in 2017, and will continue to grow at a 22% compound annual growth rate (CAGR); a quick arithmetic check of these figures follows this list. Public cloud platforms, the fastest growing segment, will generate $44 billion in 2018. By the end of 2017, Forrester expects that half of all global enterprises will rely on public cloud platforms.
  • AWS, Google, and Microsoft will capture 76% of all cloud platform revenue in 2018, and 80% by 2020. Microsoft, Oracle, and Salesforce together have a 70% share of all SaaS sales force automation and customer service subscription revenue.
  • Forrester predicts SaaS vendors will compete more at the platform level, running portions of their services on Amazon AWS, Microsoft Azure, Google Cloud Platform (GCP) or Oracle Cloud in 2018. Given the increased demands for application customization, combined with the convergence of digital technologies such as IoT and AI, Forrester predicts that SaaS vendors will de-prioritize their own platform efforts to attain global scale and select from AWS, Azure, GCP, or Oracle Cloud. Salesforce is emphasizing platform capabilities that enable artificial intelligence (AI), advanced development of its Lightning platform and cloud, and its rapidly evolving Einstein solution. Like Salesforce, Workday is investing in its platform with greater depth of features. Both companies need a broad public cloud platform capable of scaling fast to support global deployments, which is one of the primary factors that led Salesforce and Workday to choose Amazon AWS. Forrester predicts this trend will accelerate with other SaaS vendors in 2018.
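Here is the quick arithmetic check promised above: $146B growing at the stated 22% rate does land on roughly the $178B forecast. The 2019-2020 lines are our own extrapolation of that CAGR, not Forrester's numbers.

```python
# Sanity check of the headline figures: $146B at 22% CAGR.
base_2017 = 146.0   # $B, from the study
cagr = 0.22

forecast_2018 = base_2017 * (1 + cagr)
print(f"2018 forecast: ${forecast_2018:.0f}B")   # ~ $178B, as stated

# Compounding the same CAGR onward (our extrapolation, not Forrester's):
for year in (2019, 2020):
    forecast_2018 *= (1 + cagr)
    print(f"{year} projection: ${forecast_2018:.0f}B")
```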

The following graphic lists Forrester’s cloud computing predictions for 2018:
Forrester’s Predictions 2018: Cloud Computing Accelerates Enterprise Transformation Everywhere
source: https://www.forbes.com/sites/louiscolumbus/2017/11/07/forresters-10-cloud-computing-predictions-for-2018/

Prepare your business for the e-commerce wave

As the digital economy that our government is currently pushing hard continues to grow, it is time for your business to consider riding that wave as well.
How? First, join an e-commerce platform that clearly carries a lot of traffic. Why? Building your own website is good, and building your own website with e-commerce capability is also good. But traffic will not necessarily come to it. Business opportunities arise far more easily on the big e-commerce platforms, so we need to be present there.
Based on data from 2017, the following are the most promising e-commerce platforms for your business. iPrice ranks them first by which platforms were searched for the most, so concentrate your effort on these five platforms.

Next, the data also point to the platforms that were visited the most throughout 2017.

Also worth noting: most visitors reach the e-commerce platforms through websites and apps. Here is the ranking by apps used.

The same holds in the iOS App Store.

Ease of access and ease of transacting through apps clearly matter a great deal.
These e-commerce platforms are also very active on social media, where the average Indonesian spends 16 minutes per day, and all of them work hard to stay active and educate the public there.
The next question: how do you start? Simply register and log in on the platforms above; you can focus on those five first. Then prepare a catalog (product information, images, selling prices) of the products you are going to sell. Focus on the products that are most promising and most likely to be searched for.
Redirect the budget for maintaining your own e-commerce website into a budget for hiring people to manage your presence on the e-commerce platforms. Why? One minimum-wage salary is enough to manage up to five platforms. Some colleagues who used to run many offline stores now keep just one store, yet still employ many staff to handle the sales traffic coming from the e-commerce platforms. One colleague of mine even manages up to 11 e-commerce platforms with only 2-3 people. The same people also handle customer interaction, whether through the platforms themselves or through the email and chat channels they provide.
Review the results over a few months and focus your effort on the most profitable platforms. Many platforms run attractive promotions, from free shipping to bonuses and more. Remember, margins when selling through the e-commerce platforms can be thin, so protect your margin in every way you can.
Next, keep educating. People will return to your online store because of good service, product education, and solid technical support. So we still have to handle incoming complaints via email and chat; try to use a helpdesk system so complaints are managed properly.
All product and technology education can still go on our own website and blog. Also use an easily accessible blog such as Blogger.
Finally, bring in the customer voice. Every comment, rating, and opinion from a happy customer lifts our sales traffic. Focus on the customer voice and your products will attract ever more people.
Make sure your business joins the Indonesian e-commerce wave this year.
Fanky Christian - Deputy General Chairman of APOI (Asosiasi Pebisnis Online Indonesia) - Chairman of DPD DKI APTIKNAS (Asosiasi Pengusaha TIK Nasional) - Deputy General Chairman of ASISINDO (Asosiasi Sistem Integrator & Sekuriti Indonesia)
other source: https://www.digitalnewsasia.com/digital-economy/2017-e-commerce-review-indonesia

A More Optimistic 2018?

A morning share and a little food for thought...

*A More Optimistic 2018?*
Wednesday, 6 December 2017

Author: *Muhamad Chatib Basri*, lecturer at the Faculty of Economics and Business, Universitas Indonesia, and Minister of Finance 2013-2014
 

AN ECONOMIST is someone who can forecast what will happen in the future and then explain, convincingly, why the forecast turned out wrong. That cynical joke may hold some truth. Forecasting growth figures, exchange rates and other indicators, let alone to two decimal places, mainly shows that an economist has a good sense of humor. If so, what is the point of discussing the economic outlook?

The direction of growth

I think what matters most is the direction. Where is Indonesia's economic growth headed in 2018? Is there reason to be more optimistic? My answer is yes, with a few caveats. Why? First, the IMF/World Bank annual meetings last October projected world oil prices in the range of US$50-US$60, higher than the range of the past two years. A World Bank study shows that oil price increases tend to be followed by increases in non-oil energy and commodity prices.

We can therefore expect commodity and energy prices to remain relatively strong at least through next year. There are several reasons. The world economy, for one, has begun to show signs of improvement. The US economy grew 3.3% in the third quarter of 2017, and growth has started to return in Europe as well.

Among the ASEAN countries and Vietnam, growth is accelerating everywhere: in the third quarter Vietnam grew 7.5%, the Philippines 6.9%, Singapore 5.2% and Malaysia 6.2%. Unfortunately, Indonesia remained stuck at 5.06%. The more optimistic mood is also supported by Chinese growth coming in better than expected.

What does this imply for Indonesia? Because 60% of Indonesia's exports are tied to energy and commodities, the price increases lift export growth. That is why Indonesia's export growth rose sharply in the third quarter of 2017. The improving US economy has also raised our exports to that country.

Second, what about household consumption? The data show that household consumption growth has indeed slowed compared with the 2011 period. In early 2011 household consumption grew 5.8%; from 2015 until now it has grown in the 5% range, and in the third quarter of 2017 only 4.9%. The cause was the end of the commodity and energy boom, which then hit household consumption, and people concluded that purchasing power had weakened.

On the other hand, some attribute the slowdown to the rapid rise of online business displacing conventional retail. The data do show that online business and logistics have grown enormously over the past three years. How do we reconcile the two views? I think both arguments are correct. The data show that consumption growth has actually been roughly flat over the past three years, hovering around 5%.

Unfortunately, that does not tell the whole story. We have to look at behavior in each consumer segment, because consumer behavior differs sharply across segments. Online shoppers necessarily depend on an internet-connected computer and, above all, a smartphone. Computers and smartphones are still relatively expensive for most Indonesians to own personally.

It is therefore likely that the users come from the middle class and above. In addition, payment in online business mainly runs through banking services, whether ATM, debit card, credit card, m-banking or internet banking, and the data show that fewer than 40% of Indonesians have access to banking. Demographically, smartphone app usage tends to be dominated by the young. From this picture, online-business consumers tend to be middle-to-upper income, urban and young, which is why the segment is limited; the estimated value of e-commerce in Indonesia is still only around 2%-4%.

But what about the lower-middle group? BPS data show that real daily farm wages fell from Rp38,955 (October 2014) to Rp37,860 (October 2017), while real daily laborers' wages fell from Rp67,305 (October 2014) to Rp64,894. The farmers' terms-of-trade index went from 102.87 (October 2014) to 102.78 (October 2017). So real wages at the bottom have fallen, while farmers' terms of trade are practically unchanged.

These data suggest that purchasing power in the lower-middle segment is very likely under pressure. The finding is consistent with an AC Nielsen survey and with statements from Bank Indonesia and the head of Statistics Indonesia (BPS) indicating that the lower-middle class is under purchasing-power pressure.

This is actually easy to explain. The upper-middle group generally works in the modern, formal sector, where salaries are adjusted for inflation. They also earn income from investments in the financial sector, such as stocks and bonds, which have indeed risen over the past few years.

Meanwhile, those from the lower class working in the informal sector see no wage adjustment in line with inflation. But isn't VAT revenue growing relatively fast, a sign that transactions are happening? Quite right. High VAT growth is driven by growing transactions in the formal sector covered by the tax net, reflecting the consumption growth of the upper-middle class.

Unfortunately, the consumption pattern of the lower-middle class cannot be fully detected through VAT. Why? Because they generally shop not in the formal sector but at simple stalls or street vendors that charge no VAT since they have no taxpayer number (NPWP).

Here we can see that both arguments are correct, because the patterns differ. The upper-middle group tends to show rising consumption growth, while the lower-middle group is stagnating or under pressure. In the upper-middle group, as income rises, the consumption pattern shifts from staples toward leisure. This follows Engel's law, which says that as income increases, the share of spending on staples declines. That explains why household consumption growth for food and beverages outside restaurants, and for clothing, footwear, personal-care services, housing and household equipment, has slowed.

Meanwhile, household consumption growth for transportation, communication, restaurants and hotels has risen as consumption shifts toward leisure. The lower-middle group, of course, cannot yet afford leisure, as they are still struggling to meet basic needs.

Third, looking ahead, won't consumption also rise as commodity and energy prices improve? It will, but it takes time. Why? James Duesenberry, an economist at Harvard University, proposed an interesting theory: even when a person's income falls, their consumption does not fall by much, because people try to maintain consumption at the highest level they have attained. Put simply, even when income falls, people do not easily lower their lifestyle.

Duesenberry was right. When Indonesians' incomes fell with the collapse of commodity prices from 2013, they tried to maintain consumption by drawing down savings. Once the savings were depleted, they may have started borrowing to sustain consumption. Only when they could no longer borrow would they cut consumption.
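A toy simulation of this ratchet effect, with entirely invented numbers, shows why the adjustment is delayed by several periods: consumption holds while savings and then credit absorb the income shock, and only falls once both run out.

```python
# Toy simulation of Duesenberry's ratchet effect (invented numbers):
# income drops, but consumption holds until savings and credit run out.
income, consumption = 100.0, 90.0
savings, credit_line = 30.0, 20.0

income = 70.0                      # commodity-bust income shock
for month in range(1, 7):
    gap = consumption - income     # shortfall to sustain the lifestyle
    if savings >= gap:
        savings -= gap             # first: draw down savings
    elif credit_line >= gap:
        credit_line -= gap         # then: borrow
    else:
        consumption = income       # finally: cut consumption
    print(f"month {month}: consumption={consumption:.0f} "
          f"savings={savings:.0f} credit={credit_line:.0f}")
```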

This explains why, when our economic growth fell into the 5% range over the past three years from above 6% previously, consumption growth held at around 5%. We also saw growth in third-party bank deposits slowing through the end of 2016. But as coal and palm oil prices improve, incomes in the commodity- and coal-producing regions have begun to rise.

The impact on consumption is not immediate, however. Why? Households must first consolidate their personal finances, repaying debt, rebuilding savings and so on, before raising consumption. The improvement should show up in consumption roughly three to four quarters after natural-resource prices recover. From this angle there is genuine hope that household consumption will improve somewhat next year.

Fourth, what about investment? BPS data show an encouraging pattern. In the third quarter of 2017 investment growth rose to 7.1% (year on year), higher than at any time since June 2013. Looking at the detail, the increase was driven by investment in machinery and equipment.

What does raise questions is that the rise in investment growth is not matched by investment-credit growth, which has tended to decline. My guess is that the expansion is driven more by government investment, or by investment financed by part of the capital flowing into Indonesia. Whatever the source, the rise in investment is consistent with rising import growth, which is dominated by raw materials and capital goods.

Rising import growth is thus in line with firms' inclination to expand their businesses. Another factor that I think supports investment is the improving business climate, in line with Indonesia's better ease-of-doing-business ranking.

Stay vigilant
From this perspective, I think Indonesia has reason to be more optimistic about 2018. On this picture, I project economic growth in the range of 5.1%-5.3%. The question is whether it will be sustainable, and what risks could disrupt it. Here, I think, how far growth can rise still depends on household consumption, whose share of the economy remains relatively large.

If the positive impact of higher commodity and energy prices feeds through to household consumption, we can expect growth to rise. But if household consumption stays flat because purchasing power at the bottom has not improved significantly, the investment expansion will not be sustainable; after all, what is the point of expanding capacity if there is no demand?

Moreover, if higher savings are not channeled into investment, because of political jitters ahead of the elections or worries about taxes, investment will not rise sharply. External conditions also deserve attention. The basic assumption of this analysis is that commodity and energy prices hold up because the world economy keeps improving.

But if the world economy is disrupted, say by a bubble in the US financial markets or trouble in the Chinese economy, we cannot be that optimistic. Some time ago Jeffrey Frankel, a professor at Harvard University, reminded me of the risk of a bubble in the US stock market, voicing his concern about the bubble forming there and advising the financial sector to be more cautious.

So although there is reason to be more optimistic about 2018, Indonesia must stay vigilant. A great deal of uncertainty lies ahead of us, including geopolitical risk on the Korean Peninsula, and many variables cannot be fully anticipated. It is therefore good to remember the joke: an economist is someone who can predict what will happen in the future, and then explain, convincingly, why the prediction was wrong...!

GROWTH, ENGAGEMENT, REVENUE


When we start an innovation (new product development), we are sometimes pressed to generate revenue right away. Yet we know it takes time before an innovation finally pays off, and that is if it pays off at all, because when we innovate we sometimes succeed and sometimes fail (still, you have to innovate anyway, because the risk of not innovating at all is much bigger!).
So what should we do?
Well, let us look at the true story of the founder and creator of LinkedIn.

His name is Reid Hoffman, the founder and creator of LinkedIn, the most popular social media platform for professionals.
From the start, Reid held the view (which I fully agree with) that every professional must be the CEO of his or her own career.
That every professional must invest in their own research and development (continuous learning), committing time, energy and money from part of their revenue (the salary they receive each month).
So as CEO (of their own career), they must also be the marketer of their own career.

This is where Reid saw an opportunity: the need for a platform where professionals could "market" themselves (or, more precisely, their competence), and where companies could find professionals whose competence is relevant to their needs.

So Reid had a good idea from the start. But he still had to sell the idea and pitch it to investors.
Unfortunately, at first everyone doubted that Reid's idea could become a business generating substantial revenue.
Everyone questioned what value Reid would deliver to his subscribers and how much revenue it would bring in.

Reid insisted that revenue was not his priority yet. He argued that the three stages that mattered in his business were Growth, Engagement and Revenue.
That is, he first had to acquire as many customers or subscribers as possible, build strong emotional engagement with them, and only then would revenue come.

And this is where Reid was tested. The first trait an entrepreneur must have is persistence, perseverance, grit, tenacity, resilience, sheer stubbornness!
So Reid pressed on, selling his idea and looking for investors willing to fund this business idea.

The rest is history! LinkedIn has now become the platform used by almost every professional.
Most companies also use LinkedIn to find candidates for key positions.
Growth has been achieved, Engagement has been built, and of course Revenue now flows freely!

In an interview, Reid Hoffman shared three keys to his success on his journey as an entrepreneur ...

1. EXECUTE YOUR BIGGEST IDEA

Think up several ideas for your business, especially ideas that can truly solve a big problem your prospective customers face.
Think in terms of "must have" (something they need in order to solve their biggest problem) rather than "nice to have" (something fine to have, but no great loss without it).

Many companies struggle to find candidates with the right profile, and many executives want to test the market and showcase their competence. These two problems are what LinkedIn took as the starting idea for its business.

Then implement your biggest idea: the one that brings the greatest benefit to your customers and can make an impact on many companies or individuals.

2. BUILD AN EXTREME DIFFERENTIATOR

From the start, you must think about what will set your product apart from all the others.
I call it an extreme differentiator (not just a differentiator).

LinkedIn is different from the rest because it is used specifically by professionals and executives to market their professional skills.
Their database is also extraordinarily large, and they have a sophisticated, extremely fast search-engine algorithm.

You have to think about what your extreme differentiator is: why your customer will choose you (and eventually pay you).

3. LAUNCH EARLY TO THE MARKET

Do not wait for your product to be perfect!
Launch your product to the market, learn its shortcomings, fix them, and launch again.
Do that over and over so your product keeps getting better.
If you wait for a perfect product and keep delaying, your competitors will move faster and get there before you.

Launch, learn, improve and re-launch. Do it repeatedly!


So remember: the 1-2-3 rules of entrepreneurship that we learn from LinkedIn ...

1. EXECUTE YOUR BIGGEST IDEA
2. BUILD AN EXTREME DIFFERENTIATOR
3. LAUNCH EARLY TO THE MARKET



Warm regards,

Pambudi Sunarsihanto
