Sebastian Kropp

Science, Technology and Enterprise Architecture Blog

Enterprise Architecture for Healthcare


Accelerating Change

The healthcare system in the US is changing rapidly. The Affordable Care Act (ACA) introduced regulations requiring insurers and providers to change at an unprecedented pace. Whether these changes are for better or worse depends on your perspective. But actors unable to adapt to the new lay of the land will be left behind.

So what has changed? The ACA is, and has been, politically very contentious. So it might surprise some that, had the ACA not passed, something similar would likely have been put in place. Looking at the history of health care reform in the United States, it becomes clear that the main trajectory has remained the same: each of these reforms increases the pressure for improved health care with a market-based philosophy in mind.

D3 Update Pattern on Nested Data


This post builds on Mike Bostock’s great tutorial on how selection works on nested data and his series on the update pattern. To make the example more realistic, let us build a table that shows counts of log messages for different applications and for the severity levels DEBUG, INFO, WARN, ERROR, and FATAL. The table will update itself in response to changes in the log count data. Messages could be pushed over WebSockets, but we will just simulate this for now.

Here is what the finished logging table application looks like. Feel free to play around with it.

Try it out on jsFiddle: http://jsfiddle.net/skropp/k43r9qmc/10/
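The fiddle contains the full code; as a rough sketch of the core idea (using D3 v3-style selections as in Bostock’s tutorials, with an illustrative data shape rather than the fiddle’s exact code), the nested update boils down to binding applications to table rows and the per-severity counts to cells:

```javascript
// Sketch of the nested update pattern (D3 v3 selections). The data shape and
// names are illustrative, not the fiddle's code.
// logData looks like: [{ app: "billing", counts: [12, 40, 3, 1, 0] }, ...]
// where the counts line up with DEBUG, INFO, WARN, ERROR, FATAL.
function update(logData) {
  // Outer join: one table row per application, keyed by app name.
  var rows = d3.select("tbody").selectAll("tr")
      .data(logData, function (d) { return d.app; });

  rows.enter().append("tr")
    .append("th")
      .text(function (d) { return d.app; });

  rows.exit().remove();

  // Inner join: one cell per severity level, bound to the nested counts array.
  var cells = rows.selectAll("td")
      .data(function (d) { return d.counts; });

  cells.enter().append("td");

  cells.text(function (d) { return d; });
}
```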

You can easily adapt this pattern to show nested bar charts or similar.

How Hadoop Does Not Scale


We currently read a lot about how well Hadoop scales by mapping data and processes out to commodity nodes. To disappoint you right away: this post is not a general criticism of Hadoop. I do not even want to argue that Hadoop does not scale logarithmically. There are already plenty of papers that look at which algorithms are suited for MapReduce.

The purpose of this post is to look at slightly different aspects of scale, mostly from an enterprise and financial viewpoint. What “Hadoop” means is changing rapidly; when I refer to Hadoop, I mean the traditional way of batch processing with MapReduce, not the amazing community of ambitious developers trying to find better ways for society to cope with the Big Data challenge.

Why Hadoop does not scale

Big Data Visualization and D3


D3.js is a great JavaScript library for visualizing data. Visualization is an overlooked aspect of the Big Data picture: the real value of data lies in gaining understanding and acting on it, and visualization is a great way to make data understandable.

Additionally, as we look closer at “Velocity” as one of the 3Vs of Big Data, we need mechanisms to ingest and display events immediately. We need libraries that are compatible with our event-driven architectures. Hey, we finally have WebSockets, let’s use them! Maintaining report schedules and running batch processes is so 1990s and a huge overhead.
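As a toy sketch of what that could look like (the endpoint URL, message format, and the update function are assumptions for illustration, not code from a real system), each incoming WebSocket message is folded into the bound data and the D3 selection is updated right away:

```javascript
// Hypothetical event-driven feed: every pushed message updates the
// visualization immediately, with no report schedule or batch job in between.
var counts = [];                                            // data bound to the D3 view
var socket = new WebSocket("ws://example.com/log-events");  // assumed endpoint

socket.onmessage = function (msg) {
  var event = JSON.parse(msg.data);   // e.g. { app: "auth", severity: "ERROR" }
  counts.push(event);
  update(counts);                     // re-run a D3 update pattern on each event
};
```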