Java Concurrency and Scalability Platform Akka Celebrates Fifth Anniversary

Akka Raised the Standard for Handling Scale and Failure on the JVM; Today Its Developer Community Continues to Grow Across Data Streaming and Internet of Things Use Cases

SAN FRANCISCO, CA -- (Marketwired) -- 07/10/14 -- Typesafe, provider of the world's leading Reactive platform, today announced that July 12 will mark the five-year anniversary of Akka, the popular runtime and toolkit for concurrency and scalability on the JVM ("Java Virtual Machine"), supported through the years by developers at high-growth and blue-chip companies like Amazon, BBC, Cisco, Credit Suisse, eBay, Groupon, Huffington Post and many more.

The Akka Creation Story (click here for a full interactive timeline of the history of Akka):

Akka was originally created by Swedish programmer Jonas Bonér -- who had built compilers, runtimes and open source frameworks for distributed applications at vendors like BEA and Terracotta. He'd experienced the scale and resilience limitations of CORBA, RPC, XA, EJBs, SOA, and the various Web Services standards and abstraction techniques that Java developers used to approach the overall problem set over the last 20 years. He'd lost faith in those ways of doing things.

This time he looked outside of the Java and enterprise space for answers. He spent some time with the Oz and Erlang programming languages. He saw a lot that he liked in how Erlang managed failure for services that simply could not go down (things like telecom switches for emergency calls), and in how principles from Erlang and Oz could be applied to the concurrency and distributed computing frontiers facing mainstream enterprises. In particular, he saw the Actor Model, with its emphasis on loose coupling and embracing failure in software systems, together with dataflow concurrency, as the bridge to the future.
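
For readers unfamiliar with the model, here is a minimal, hedged sketch of an Actor in classic Akka (Scala). The names (Counter, "demo") are illustrative and not from this release; the point is that each actor owns private state and touches it only while processing messages, one at a time.

```scala
// Minimal sketch of the Actor Model with classic Akka actors (hypothetical names).
import akka.actor.{Actor, ActorSystem, Props}

class Counter extends Actor {
  private var count = 0                    // state is never shared directly
  def receive: Receive = {
    case "increment" => count += 1         // state changes only via messages
    case "report"    => sender() ! count   // replies are asynchronous messages too
  }
}

object Main extends App {
  val system  = ActorSystem("demo")
  val counter = system.actorOf(Props[Counter](), "counter")
  counter ! "increment"                    // fire-and-forget sends
  counter ! "report"
}
```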

After months of intense thinking and hacking, Bonér shared his vision for the Akka Actor Kernel (now simply "Akka") on the Scala mailing list, and about a month later (on July 12, 2009) shared the first public release of Akka 0.5 on GitHub. Today Akka is the open source platform that major financial institutions use to handle billions of transactions, and that massively trafficked sites like Walmart and Gilt use to scale their services for peak usage. A full interactive timeline of the history of Akka (including a list of contributors) may be viewed here.

Recent Akka Highlights

As the Akka community has grown, the platform has been leveraged to power highly trafficked websites, data and analytics pipelines, large-scale data movement, batch processing, real-time processing, and other distributed computing use cases where success means achieving low latency and high throughput. In recent years, several key growth areas have emerged for Akka:

Akka Cluster
In July 2013, version 2.2 of Akka shipped under the code name "Coltrane" and included full support for clustering. Akka Cluster provides a fault-tolerant, decentralized, peer-to-peer cluster membership service with no single point of failure and no single bottleneck. It does this using gossip protocols and an automatic failure detector. It also ships with a suite of high-level modules on top, providing clustered pub/sub, cluster singleton, cluster sharding and more.
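
As a hedged illustration (not code from the release), the sketch below shows an actor subscribing to cluster membership events with the classic Akka Cluster API. It assumes a node whose application.conf configures the cluster actor-ref provider and seed nodes; the actor name is hypothetical.

```scala
// Sketch: reacting to Akka Cluster membership events (hypothetical names).
// Assumes application.conf sets the cluster provider and seed nodes.
import akka.actor.{Actor, ActorLogging}
import akka.cluster.Cluster
import akka.cluster.ClusterEvent.{InitialStateAsEvents, MemberEvent, MemberUp, UnreachableMember}

class ClusterListener extends Actor with ActorLogging {
  private val cluster = Cluster(context.system)

  // Register for membership and reachability events when the actor starts.
  override def preStart(): Unit =
    cluster.subscribe(self, initialStateMode = InitialStateAsEvents,
      classOf[MemberEvent], classOf[UnreachableMember])

  override def postStop(): Unit = cluster.unsubscribe(self)

  def receive: Receive = {
    case MemberUp(member)          => log.info("Member is up: {}", member.address)
    case UnreachableMember(member) => log.info("Member detected as unreachable: {}", member)
    case _: MemberEvent            => // other membership transitions ignored here
  }
}
```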

Akka Persistence
Predictably handling failure across distributed systems is Akka's calling card. But what happens to an Actor's state when things start failing? In October 2013, Akka Persistence was introduced to allow stateful actors to recover their in-memory state after JVM crashes. The key concept in Akka Persistence is Event Sourcing: instead of storing an actor's state directly, you persist the state-changing events that are sent to the Actor. These events are immutable facts appended to a journal (backed by pluggable durable storage), which allows for very high transaction rates and enables efficient replication, migration, replay, auditing, and another powerful layer of failure management.
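
To make the event-sourcing idea concrete, here is a minimal, hedged sketch using the classic PersistentActor API from later Akka releases (the 2013-era API looked different). The command, event, and identifier names are hypothetical, and a journal plugin must be configured for it to run.

```scala
// Sketch of an event-sourced actor with Akka Persistence (hypothetical names).
// Commands are turned into events, persisted, and only then applied to state.
import akka.persistence.PersistentActor

final case class Add(amount: Int)    // command sent to the actor
final case class Added(amount: Int)  // immutable fact appended to the journal

class CounterActor extends PersistentActor {
  override def persistenceId: String = "counter-1"

  private var total = 0

  // Live operation: persist the event, then update in-memory state.
  override def receiveCommand: Receive = {
    case Add(n) =>
      persist(Added(n)) { evt => total += evt.amount }
  }

  // Recovery after a crash or restart: replay the journal to rebuild state.
  override def receiveRecover: Receive = {
    case Added(n) => total += n
  }
}
```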

Akka Streams
Historically, stream-based processing on the JVM has been perilous for both developers and operations: when data is streamed at higher rates than recipients can handle, it builds up in the system until no space is left, leading to system failures in production. In April 2014, Typesafe announced the release of Akka Streams, designed to help developers more easily achieve truly asynchronous, non-blocking data streaming on the JVM.
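
The sketch below, written against the stabilized Akka Streams API from later releases rather than the 2014 preview, shows the core idea: a Source, a transformation, and a Sink joined into a pipeline where demand flows upstream so producers cannot overwhelm consumers. The system name is hypothetical, and the materializer is assumed to come implicitly from the ActorSystem as in Akka 2.6.

```scala
// Sketch of a back-pressured Akka Streams pipeline (hypothetical names).
// The sink signals demand upstream, so the source never outpaces the consumer.
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Sink, Source}

object StreamDemo extends App {
  implicit val system: ActorSystem = ActorSystem("streams-demo")

  Source(1 to 1000)                          // emit elements only on demand
    .map(_ * 2)                              // transform as they flow through
    .runWith(Sink.foreach(println))          // consume; returns a Future[Done]
    .onComplete(_ => system.terminate())(system.dispatcher)
}
```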

Akka HTTP
In June 2013, Typesafe acquired Spray.io, one of the best-performing REST/HTTP libraries in the Java ecosystem. Then in June 2014, Typesafe announced the first preview of the core module of Akka HTTP, a suite of lightweight Scala libraries providing client and server RESTful support on top of Akka. It fully embraces the Actor-, Future-, and Stream-based programming models used by the underlying platform, letting developers build high-performance, scalable RESTful applications in idiomatic Java and Scala code without having to wrap other Java libraries.
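
As a hedged example of the routing style described above, here is a minimal server using the high-level routing DSL from the stabilized Akka HTTP library rather than the 2014 preview; the route, system name, and port are hypothetical.

```scala
// Sketch of a small Akka HTTP server using the routing DSL (hypothetical names).
import akka.actor.ActorSystem
import akka.http.scaladsl.Http
import akka.http.scaladsl.server.Directives._

object HelloServer extends App {
  implicit val system: ActorSystem = ActorSystem("http-demo")

  // Directives compose into a route; complete(...) produces the HTTP response.
  val route =
    path("hello") {
      get {
        complete("Hello from Akka HTTP")
      }
    }

  Http().newServerAt("localhost", 8080).bind(route)
}
```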

Recent Akka Presentations at Scala Days 2014:

Additional Resources:

About Typesafe
Typesafe (Twitter: @Typesafe) is dedicated to helping developers build Reactive applications on the JVM. With the Typesafe Reactive Platform, you can create modern, event-driven applications that scale on multicore and cloud computing architectures. Typesafe Activator, a browser-based tool with reusable templates, makes it easy to get started with Play Framework, Akka and Scala. Backed by Greylock Partners, Shasta Ventures, Bain Capital Ventures and Juniper Networks, Typesafe is headquartered in San Francisco with offices in Switzerland and Sweden. To start building Reactive applications today, download Typesafe Activator!

Image Available: http://www2.marketwire.com/mw/frame_mw?attachid=2636189


