
Presentation Transcript


  1. Hazelcast Talip Ozturk talip@hazelcast.com

  2. Agenda • Introduction • Code Samples • Demo • Internals • Q/A

  3. What is Hazelcast? • In-Memory Data Grid (IMDG) • Clustering and highly scalable data distribution solution for Java • Distributed Data Structures for Java • Distributed Hashtable (DHT) and more

  4. Why Hazelcast? • Scale your application • Share data across the cluster • Partition your data • Send/receive messages • Balance the load • Process in parallel on many JVMs

  5. Solutions in the Market • Oracle Coherence • IBM WebSphere eXtreme Scale / ObjectGrid • Terracotta • Gigaspaces • Gemstone • JBossCache/JGroups

  6. Difference • License / Cost • Feature-set • Ease of use • Main focus (distributed map, tuple space, cache, processing vs. data) • Light/Heavy weight

  7. Introducing Hazelcast • Open source (Apache License) • Super light, simple, no-dependency • Distributed/partitioned implementation of map, queue, set, list, lock and executor service • Transactional (JCA support) • Topic for pub/sub messaging • Cluster info and membership events • Dynamic clustering, backup, fail-over

  8. Data Partitioning in a Cluster If you have 5 million objects in your 5-node cluster, then each node will carry 1 million objects and 1 million backup objects. [Diagram: five cluster members, Server1 through Server5, each holding its share of the data.]
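
A back-of-the-envelope sketch of that arithmetic, as a hedged example: it assumes one backup copy per entry, which is what the 1-million-backups figure implies.

long totalObjects = 5000000L;  // numbers taken from the slide
int clusterSize = 5;
int backupCount = 1;           // assumption: one backup copy per entry

long ownedPerNode = totalObjects / clusterSize;    // 1,000,000 owned objects per node
long backupsPerNode = ownedPerNode * backupCount;  // 1,000,000 backup objects per node
long heldPerNode = ownedPerNode + backupsPerNode;  // 2,000,000 objects in memory per node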

  9. SuperClient in a Cluster • -Dhazelcast.super.client=true • As fast as any member in the cluster • Holds no data [Diagram: five cluster members, Server1 through Server5.]
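
A minimal sketch of using a super client from code, assuming the property can also be set programmatically before the first Hazelcast call (on the slide it is passed as a JVM flag); the map name and Customer type are just the examples used elsewhere in this deck.

// Illustrative only: mark this JVM as a super client, then use the cluster as usual.
System.setProperty("hazelcast.super.client", "true");

// This JVM joins the cluster and can use every distributed structure,
// but it owns no partitions and holds no data itself.
Map<String, Customer> customers = Hazelcast.getMap("customers");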

  10. Code Samples – Cluster Interface

import com.hazelcast.core.*;
import java.util.Set;

Cluster cluster = Hazelcast.getCluster();
cluster.addMembershipListener(listener);
Member localMember = cluster.getLocalMember();
System.out.println(localMember.getInetAddress());
Set setMembers = cluster.getMembers();
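
The listener handed to addMembershipListener is not shown on the slide; here is a minimal sketch of one, assuming the memberAdded/memberRemoved callbacks of com.hazelcast.core.MembershipListener.

import com.hazelcast.core.MembershipEvent;
import com.hazelcast.core.MembershipListener;

// Illustrative listener: print cluster membership changes as they happen.
public class ClusterWatcher implements MembershipListener {
    public void memberAdded(MembershipEvent event) {
        System.out.println("Member joined: " + event.getMember());
    }

    public void memberRemoved(MembershipEvent event) {
        System.out.println("Member left: " + event.getMember());
    }
}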

  11. Code Samples – Distributed Map

import com.hazelcast.core.Hazelcast;
import java.util.Map;

Map<String, Customer> map = Hazelcast.getMap("customers");
map.put("1", customer);
Customer c = map.get("1");

  12. Code Samples – Distributed Queue

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

BlockingQueue<Task> queue = Hazelcast.getQueue("tasks");
queue.offer(task);
Task t = queue.poll();
t = queue.poll(5, TimeUnit.SECONDS);

  13. Code Samples – Distributed Set

import com.hazelcast.core.Hazelcast;
import java.util.Set;

Set<Price> set = Hazelcast.getSet("IBM-Quote-History");
set.add(new Price(10, time1));
set.add(new Price(11, time2));
set.add(new Price(13, time3));
for (Price price : set) {
    // process price
}

  14. Code Samples – Distributed Lock

import com.hazelcast.core.Hazelcast;
import java.util.concurrent.locks.Lock;

Lock mylock = Hazelcast.getLock(mylockobject);
mylock.lock();
try {
    // do something
} finally {
    mylock.unlock();
}

  15. Code Samples – Distributed Topic

import com.hazelcast.core.*;

public class Sample implements MessageListener {
    public static void main(String[] args) {
        Sample sample = new Sample();
        Topic topic = Hazelcast.getTopic("default");
        topic.addMessageListener(sample);
        topic.publish("my-message-object");
    }

    public void onMessage(Object msg) {
        System.out.println("Got msg: " + msg);
    }
}

  16. Code Samples – Distributed Events

import com.hazelcast.core.IMap;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.EntryListener;
import com.hazelcast.core.EntryEvent;

public class Sample implements EntryListener {
    public static void main(String[] args) {
        Sample sample = new Sample();
        IMap map = Hazelcast.getMap("default");
        map.addEntryListener(sample, true);
        map.addEntryListener(sample, "key");
    }

    public void entryAdded(EntryEvent event) {
        System.out.println("Added " + event.getKey() + ":" + event.getValue());
    }

    public void entryRemoved(EntryEvent event) {
        System.out.println("Removed " + event.getKey() + ":" + event.getValue());
    }

    public void entryUpdated(EntryEvent event) {
        System.out.println("Updated " + event.getKey() + ":" + event.getValue());
    }
}

  17. Code Samples – Transactions

import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.Transaction;
import java.util.Map;
import java.util.Queue;

Map map = Hazelcast.getMap("default");
Queue queue = Hazelcast.getQueue("default");
Transaction txn = Hazelcast.getTransaction();
txn.begin();
try {
    Object obj = queue.poll();
    // process obj
    map.put(key, obj);
    txn.commit();
} catch (Exception e) {
    txn.rollback();
}

  18. Code Samples – Executor Service

FutureTask<String> futureTask = new DistributedTask<String>(new Echo(input), member);
ExecutorService es = Hazelcast.getExecutorService();
es.execute(futureTask);
String result = futureTask.get();
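
The Echo task is used above but never defined; a plausible sketch follows, assuming it simply echoes its input together with the member that executed it. The task must be Serializable so Hazelcast can ship it to the chosen member.

import com.hazelcast.core.Hazelcast;
import java.io.Serializable;
import java.util.concurrent.Callable;

// Hypothetical Echo task: only its usage appears on the slide, the body is an assumption.
public class Echo implements Callable<String>, Serializable {
    private String input;

    public Echo() {
    }

    public Echo(String input) {
        this.input = input;
    }

    public String call() {
        // Return the input prefixed with the member that ran the task.
        return Hazelcast.getCluster().getLocalMember().toString() + ": " + input;
    }
}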

  19. Executor Service Scenario

public int addBonus(long customerId, int extraBonus) {
    IMap<Long, Customer> mapCustomers = Hazelcast.getMap("customers");
    mapCustomers.lock(customerId);
    Customer customer = mapCustomers.get(customerId);
    int currentBonus = customer.addBonus(extraBonus);
    mapCustomers.put(customerId, customer);
    mapCustomers.unlock(customerId);
    return currentBonus;
}

  20. Send computation over data

public class BonusAddTask implements Callable<Integer>, Serializable {
    private static final long serialVersionUID = 1L;
    private long customerId;
    private int extraBonus;

    public BonusAddTask() {
    }

    public BonusAddTask(long customerId, int extraBonus) {
        this.customerId = customerId;
        this.extraBonus = extraBonus;
    }

    public Integer call() {
        IMap<Long, Customer> mapCustomers = Hazelcast.getMap("customers");
        mapCustomers.lock(customerId);
        Customer customer = mapCustomers.get(customerId);
        int currentBonus = customer.addBonus(extraBonus);
        mapCustomers.put(customerId, customer);
        mapCustomers.unlock(customerId);
        return currentBonus;
    }
}

  21. Send computation over data

public int addBonus(long customerId, int extraBonus) {
    ExecutorService es = Hazelcast.getExecutorService();
    FutureTask<Integer> task =
            new DistributedTask<Integer>(new BonusAddTask(customerId, extraBonus), customerId);
    es.execute(task);
    int currentBonus = task.get();
    return currentBonus;
}

  22. Configuration

<hazelcast>
    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="true">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="false">
                <interface>192.168.1.2-5</interface>
                <hostname>istanbul.acme</hostname>
            </tcp-ip>
        </join>
        <interfaces enabled="false">
            <interface>10.3.17.*</interface>
        </interfaces>
    </network>
    <executor-service>
        <core-pool-size>16</core-pool-size>
        <max-pool-size>64</max-pool-size>
        <keep-alive-seconds>60</keep-alive-seconds>
    </executor-service>
    <queue name="default">
        <max-size-per-jvm>10000</max-size-per-jvm>
    </queue>
</hazelcast>

  23. DEMO

  24. Internals : Threads • User threads (client threads) • ServiceThread (com.hazelcast.impl.ClusterService) • InThread • OutThread • MulticastThread • ExecutorService Threads

  25. Internals : Cluster Membership • Multicast and Unicast Discovery • Every member sends heartbeats to the oldest member • Oldest Member manages the memberships • Sends member list • Tells members to sync their data

  26. Internals : Serialization • Optimized for String, byte[], Long, Integer • Custom serialization with com.hazelcast.nio.DataSerializable • Standard Java Serialization

  27. Internals : Serialization

public class Address implements com.hazelcast.nio.DataSerializable {
    private String street;
    private int zipCode;
    private String city;
    private String state;

    public Address() {
    }

    // getters, setters ...

    public void writeData(DataOutput out) throws IOException {
        out.writeUTF(street);
        out.writeInt(zipCode);
        out.writeUTF(city);
        out.writeUTF(state);
    }

    public void readData(DataInput in) throws IOException {
        street = in.readUTF();
        zipCode = in.readInt();
        city = in.readUTF();
        state = in.readUTF();
    }
}

  28. Internals : Serialization

public class Employee implements com.hazelcast.nio.DataSerializable {
    private String firstName;
    private String lastName;
    private int age;
    private double salary;
    private Address address; // Address itself is DataSerializable

    public Employee() {
    }

    public void writeData(DataOutput out) throws IOException {
        out.writeUTF(firstName);
        out.writeUTF(lastName);
        out.writeInt(age);
        out.writeDouble(salary);
        address.writeData(out);
    }

    public void readData(DataInput in) throws IOException {
        firstName = in.readUTF();
        lastName = in.readUTF();
        age = in.readInt();
        salary = in.readDouble();
        address = new Address();
        address.readData(in);
    }
}

  29. Internals : Serialization • Hazelcast doesn't work with your objects directly • Hazelcast works with com.hazelcast.nio.Data only • Data is the binary representation of your object • Data is a list of re-used java.nio.ByteBuffers

  30. Internals : ObjectPool • Thread-aware object pool • Try the thread's lock-free queue first • If thread's queue is full/empty, go to the global (concurrent) queue • See com.hazelcast.impl.ThreadContext.ObjectPool
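
A rough, generic sketch of that pattern, not Hazelcast's actual ObjectPool; the class, capacity, and field names are assumptions.

import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Each thread first uses its own small queue (no contention) and only falls back
// to the shared concurrent queue when its queue is empty (take) or full (release).
public class SimpleObjectPool<T> {
    private static final int PER_THREAD_CAPACITY = 100;

    private final Queue<T> globalQueue = new ConcurrentLinkedQueue<T>();
    private final ThreadLocal<ArrayDeque<T>> localQueue =
            new ThreadLocal<ArrayDeque<T>>() {
                protected ArrayDeque<T> initialValue() {
                    return new ArrayDeque<T>();
                }
            };

    public T take() {
        T obj = localQueue.get().poll();                  // the thread's own queue first
        return (obj != null) ? obj : globalQueue.poll();  // fall back to the global queue
    }

    public void release(T obj) {
        ArrayDeque<T> q = localQueue.get();
        if (q.size() < PER_THREAD_CAPACITY) {
            q.offer(obj);            // keep the object thread-local while there is room
        } else {
            globalQueue.offer(obj);  // overflow goes back to the global pool
        }
    }
}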

  31. Internals : Sockets • Java NIO (non-blocking mode) • There are only 2 threads for read/write regardless of the cluster size • InThread for read and accept • OutThread for write and connect • A pool of java.nio.ByteBuffers is used
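
To illustrate the pattern of a single thread handling both accept and read for every connection, here is a generic, self-contained NIO sketch; it is not Hazelcast's InThread, and the port and buffer size are arbitrary.

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class SingleThreadReader {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.socket().bind(new InetSocketAddress(5701));
        server.register(selector, SelectionKey.OP_ACCEPT);

        ByteBuffer buffer = ByteBuffer.allocate(32 * 1024); // re-used for every read

        while (true) {
            selector.select();
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {
                    SocketChannel channel = server.accept();  // a new member connected
                    channel.configureBlocking(false);
                    channel.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {
                    SocketChannel channel = (SocketChannel) key.channel();
                    buffer.clear();
                    channel.read(buffer);  // the same thread reads every channel
                    // ... hand the bytes off for processing ...
                }
            }
        }
    }
}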

  32. Internals : Sockets • Packets travel over the wire as com.hazelcast.nio.PacketQueue.Packet • Packet structure: (diagram on the slide) • Packet objects are also re-used • Processed only by the ServiceThread

  33. Internals : Map.put(key, value) • com.hazelcast.nio.Data • com.hazelcast.impl.BaseManager.Call • com.hazelcast.impl.BaseManager.Request • com.hazelcast.nio.Packet • com.hazelcast.impl.BaseManager.PacketProcessor

  34. Internals : Map.put(key, value) • Convert key-value objects to Data instances • Hash of the key tells us which member is the owner • If owner is local simply put the key/value • If remote • Send it to the owner member • Read the response • If remote owner dies, re-do
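
A rough sketch of this flow in Java-style pseudocode; every helper here (toData, ownerOf, putLocally, sendToOwner, awaitResponse) is an illustrative name, not a Hazelcast API.

// Pseudocode only.
Data keyData = toData(key);        // serialize the key into Data
Data valueData = toData(value);    // serialize the value into Data
Member owner = ownerOf(keyData);   // the hash of the key picks the owning member
if (owner.localMember()) {
    putLocally(keyData, valueData);          // owner is this JVM: store directly
} else {
    sendToOwner(owner, keyData, valueData);  // ship the request to the owner
    awaitResponse();                         // the caller blocks until the owner answers
    // if the owner dies before answering, the operation is redone against the new owner
}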

  35. Internals : Distributed Map • Fixed number of blocks (segments) • Each key falls into one of these blocks • Each block is owned by a member • Every member knows the block owners • blockId = hash(keyData) % BLOCK_COUNT • Block ownership is reassigned upon membership change • Blocks and keys migrate for load-balancing
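
The owner lookup, as a tiny sketch; BLOCK_COUNT, hash and blockOwners stand in for Hazelcast's internal structures.

// Illustrative only.
int blockId = Math.abs(hash(keyData)) % BLOCK_COUNT;  // every key maps to exactly one block
Member owner = blockOwners[blockId];                  // every member knows this ownership table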

  36. Internals : Distributed Queue • The oldest member creates blocks as needed • Every member knows the block owners • Items are added into the blocks • No migration happens; short lived objects • Each member holds a takeBlockId and putBlockId • ‘Go-Next’ if the target is wrong or block is full/empty
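
An illustrative sketch of the 'go-next' idea on the offer side; the names are assumptions, not Hazelcast code.

// Pseudocode only: keep trying blocks until one accepts the item.
boolean offered = false;
while (!offered) {
    Member owner = ownerOfBlock(putBlockId);            // every member knows the block owners
    offered = offerIntoBlock(owner, putBlockId, item);  // the owner rejects if the block is full
    if (!offered) {
        putBlockId++;  // 'go next': advance to the next block and retry
    }
}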

  37. [Diagram: a map.put(key, value) call travelling from JVM-1 to JVM-2 over TCP/IP. The call (MPut) carries a Request with the Data key and Data value, is serialized into a Packet, each side asks "Owner?", and the owning member's PacketProcessor processes the Request.]

  38. Planned Features • Eviction support • Distributed MultiMap implementation • Load/Store interface for persistence • Distributed java.util.concurrent.{DelayQueue, Semaphore, CountDownLatch} • Distributed Tuple Space • Pure Java and C# clients

  39. Questions? • http://www.hazelcast.com • http://code.google.com/p/hazelcast/ • hazelcast@googlegroups.com
