EzDevInfo.com

astyanax

Cassandra Java Client

InvalidRequestException(why:Too many bytes for comparator) when executing a query against composite columns using Astyanax

I'm trying to fetch data out of composite columns using Astyanax 1.0.9 and I get "InvalidRequestException(why:Too many bytes for comparator)".

Here is my CF:

CREATE TABLE user_attributes (
user_id bigint,
attr_name ascii,
attr_value text,
last_sync_timestamp bigint,
last_sync_digest text,
PRIMARY KEY (user_id, attr_name)
);

I can read data out with CQL:

select * from user_attributes where user_id = 1 and attr_name = 'mock';

Here is my POJO for the composite column:

public static class UserAttributeCassandraTuple implements Comparable {
    public @Component(ordinal = 0) String attrName;
    public @Component(ordinal = 1) String attrValue;
    public @Component(ordinal = 2) long lastSyncTimeStamp;
    public @Component(ordinal = 3) String lastSyncDigest;

    public int compareTo(Object o) { /* impl omitted here */ }
    public int hashCode() { /* impl omitted here */ }
    public boolean equals(Object o) { /* impl omitted here */ }
}

Here is my test driver (the keyspace is set up in a JUnit @Before and works fine for non-composite columns):

@Test
public void test_user_attributes() throws Exception {

    ColumnFamily<BigInteger, UserAttributeCassandraTuple> CF_USER_ATTR = new ColumnFamily<BigInteger, UserAttributeCassandraTuple>(
        "user_attributes", // Column Family Name
        BigIntegerSerializer.get(), // Key Serializer
        userAttributeSerializer); // Column Serializer

    // proto column for "mock"
    UserAttributeCassandraTuple mockColumn = new UserAttributeCassandraTuple();
    mockColumn.attrName = "mock";

    OperationResult<ColumnList<UserAttributeCassandraTuple>> result = keyspace.prepareQuery(CF_USER_ATTR)
        .getKey(BigInteger.valueOf(1))
        .withColumnSlice(mockColumn, mockColumn)
        .execute();
    ColumnList<UserAttributeCassandraTuple> columns = result.getResult();

    for (Column<UserAttributeCassandraTuple> c : columns) {
        System.out.println(c.getName() + "=" + c.getStringValue());
    }
}

It compiles fine, but fails during execute():

com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=22(44), attempts=1] InvalidRequestException(why:Too many bytes for comparator)
at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1$2.execute(ThriftColumnFamilyQueryImpl.java:196)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1$2.execute(ThriftColumnFamilyQueryImpl.java:188)
at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$1.execute(ThriftSyncConnectionFactoryImpl.java:132)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:52)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1.execute(ThriftColumnFamilyQueryImpl.java:186)
at com.ebay.raptor.search.test.srp.domain.cassandra.CassandraTest.test_user_attributes(CassandraTest.java:118)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:60)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37)
at java.lang.reflect.Method.invoke(Method.java:611)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:76)
at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:49)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
Caused by: InvalidRequestException(why:Too many bytes for comparator)
at org.apache.cassandra.thrift.Cassandra$get_slice_result.read(Cassandra.java:7196)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_get_slice(Cassandra.java:543)
at org.apache.cassandra.thrift.Cassandra$Client.get_slice(Cassandra.java:527)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1$2.internalExecute(ThriftColumnFamilyQueryImpl.java:201)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$1$2.internalExecute(ThriftColumnFamilyQueryImpl.java:188)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:55)
... 30 more

I tried replacing BigInteger with Long, but got the same error.

Any suggestions on what I am doing wrong?

thanks, Chuck


Source: (StackOverflow)

Hector vs Astyanax for Cassandra [closed]

We are starting a new Java web project with Cassandra as the database. The team is very well experienced with RDBMS/JPA/Hibernate/Spring but very new to the world of NoSQL. We want to start the development with as simple a setup as possible. Hector seems to be the most preferred and popular choice for connecting to Cassandra. But Netflix has recently offered Astyanax, which has its origins in Hector. Can anyone who has used both these technologies share their experiences? I am looking for easy setup, good documentation and simple/clean usage. Suggestions about other APIs are also welcome.


Source: (StackOverflow)


New Cassandra project - Astyanax or Java Driver?

I'm starting a new project with Cassandra (and plan to use the latest stable (1.2.x) version). I have tried several different Java libraries, like Hector, Astyanax, Cassandra-jdbc...

Among them, in short, my choice is Astyanax. But then I also found and tried DataStax's Java Driver, which supports the new CQL binary protocol and is much cleaner if you are only using CQL. And it seems version 1.0.0 GA will be released soon.

Which one would you recommend? Thanks.


Source: (StackOverflow)

play framework 2 performance issues on virtual machine

I have recently implemented a very tiny Cassandra web application in both PHP and Play Framework to compare these technologies. I'm running these tests on a virtual machine that has ubuntu-server on it. In both the PHP and the Play Framework application, there is only one URL that makes an insertion into a Cassandra keyspace.

In PHP, I ran the following Apache Benchmark test:

ab -n 100000 -c 100 http://mydomain.com/insert

The test results show that the server can serve about 120 requests per second.

I have made almost the same application in Play Framework, using Netflix's Astyanax Cassandra library. However, the server seems to be crashing right at the start of ab.

I'm running the Play Framework test in production mode, via the play start command in the terminal.

So I know that Play Framework is production ready. What am I doing wrong here?
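
For reference, the endpoint boils down to a single-column insert. A minimal sketch of what that might look like with Astyanax is below; it assumes an already-started Keyspace, and the column family, row key and column names are entirely hypothetical:

// Hypothetical sketch of the insert performed by the /insert endpoint.
// "keyspace" is an already-started Astyanax Keyspace; all names are placeholders.
ColumnFamily<String, String> CF_EVENTS = ColumnFamily.newColumnFamily(
        "events", StringSerializer.get(), StringSerializer.get());

MutationBatch batch = keyspace.prepareMutationBatch();
batch.withRow(CF_EVENTS, "some-row-key")
     .putColumn("created_at", System.currentTimeMillis(), null);
batch.execute();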


Source: (StackOverflow)

How can I set GCGraceSeconds in Cassandra using astyanax?

I need to set GCGraceSeconds to 0 because I have only one node, but I cannot find where to set this value. Is it possible to set it from Astyanax, or is it in some settings file?
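
For reference, gc_grace_seconds is a per-table (column family) setting rather than a client-side or cassandra.yaml option. One possible route is to issue the corresponding CQL3 statement through Astyanax's withCql(); the sketch below assumes a CQL3 table and a context built with CQL3 enabled, and the table and column family names are placeholders:

// Sketch: change an existing CQL3 table's gc_grace_seconds through withCql().
// "my_table" and CF_MY_TABLE are placeholder names.
ColumnFamily<String, String> CF_MY_TABLE = ColumnFamily.newColumnFamily(
        "my_table", StringSerializer.get(), StringSerializer.get());

keyspace.prepareQuery(CF_MY_TABLE)
        .withCql("ALTER TABLE my_table WITH gc_grace_seconds = 0;")
        .execute();

The same ALTER TABLE statement can just as easily be run once from cqlsh instead of from application code.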


Source: (StackOverflow)

how to read all 1000 rows from cassandra CF with astyanax

We have this one CF that only has about 1000 rows. Is there a way with Astyanax to read all 1000 rows? Does Thrift even support that?

thanks, Dean
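
For what it's worth, Astyanax exposes an all-rows query that pages through the row range under the hood. A minimal sketch, assuming an already-built keyspace and placeholder column family and serializer choices:

// Sketch: iterate every row of a small CF via getAllRows().
// CF name and String key/column types are assumptions for illustration.
ColumnFamily<String, String> CF_SMALL = ColumnFamily.newColumnFamily(
        "small_cf", StringSerializer.get(), StringSerializer.get());

OperationResult<Rows<String, String>> result = keyspace.prepareQuery(CF_SMALL)
        .getAllRows()
        .setRowLimit(100)   // rows fetched per underlying Thrift range call
        .execute();

for (Row<String, String> row : result.getResult()) {
    System.out.println(row.getKey() + " -> " + row.getColumns().size() + " columns");
}

Under the hood this is paged range queries over Thrift, which is fine for a column family of this size.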


Source: (StackOverflow)

Astyanax: simple write throwing this exception: Not enough bytes to read value of component

I am new to Astyanax and am trying some sample programs, and I'm getting this error. This is a simple write, so it looks like I am doing something basic wrong. I am not using composite keys. I am using version 1.56.29. Any help is really appreciated.

Caused by: InvalidRequestException(why:Not enough bytes to read value of component 0)
    at org.apache.cassandra.thrift.Cassandra$batch_mutate_result.read(Cassandra.java:20833)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
    at org.apache.cassandra.thrift.Cassandra$Client.recv_batch_mutate(Cassandra.java:964)
    at org.apache.cassandra.thrift.Cassandra$Client.batch_mutate(Cassandra.java:950)
    at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:120)
    at com.netflix.astyanax.thrift.ThriftKeyspaceImpl$1$1.internalExecute(ThriftKeyspaceImpl.java:117)
    at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:56)

Here's the code:

    AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
        .forCluster(CLUSTER_NAME)
        .forKeyspace(keySpaceName)
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()      
            .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
            .setCqlVersion("3.0.0")
            .setTargetCassandraVersion("1.2")
        )
        .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
            .setPort(50825)
            .setMaxConnsPerHost(10)
            .setSeeds("nodename:50825")
            .setConnectTimeout(20000)
        )
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());

    context.start();
    System.out.println("getting context.. done ");
    Keyspace keyspace = context.getEntity();
    MutationBatch m = keyspace.prepareMutationBatch();

    ColumnFamily<String, String> colFam = new ColumnFamily<String, String>("test",
            StringSerializer.get(), StringSerializer.get());

    m.withRow(colFam, "abc")
        .putColumn("col2", "test1", null);
    m.execute();

Here's the table describe output:

CREATE TABLE test (
 col1 text PRIMARY KEY,
 col2 text,
 col3 text
) WITH
 bloom_filter_fp_chance=0.010000 AND
 caching='KEYS_ONLY' AND
 comment='' AND
 dclocal_read_repair_chance=0.000000 AND
 gc_grace_seconds=864000 AND
 read_repair_chance=0.100000 AND
 replicate_on_write='true' AND
 populate_io_cache_on_flush='false' AND
 compaction={'class': 'SizeTieredCompactionStrategy'} AND
 compression={'sstable_compression': 'SnappyCompressor'};

Source: (StackOverflow)

Cassandra CQL3 support in Astyanax

Does Astyanax support "insert into" via a prepared statement with CQL3? I am using the latest Astyanax library, 1.56.24, and Cassandra 1.2.1. When I try to execute a prepared statement with CQL3:

keyspace.prepareQuery(conn.CF_CONTACTS)
  .withCql("INSERT INTO contacts (a, b) VALUES (?, ?);")
  .asPreparedStatement()
  .withStringValue("123")
  .withStringValue("456")
  .execute();

I get the following exception:

Caused by: InvalidRequestException(why:Cannot execute/prepare CQL2 statement since the CQL has been set to CQL3(This might mean your client hasn't been upgraded correctly to use the new CQL3 methods introduced in Cassandra 1.2+).)
at org.apache.cassandra.thrift.Cassandra$prepare_cql_query_result.read(Cassandra.java:38738)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.cassandra.thrift.Cassandra$Client.recv_prepare_cql_query(Cassandra.java:1598)
at org.apache.cassandra.thrift.Cassandra$Client.prepare_cql_query(Cassandra.java:1584)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$6$3$1.internalExecute(ThriftColumnFamilyQueryImpl.java:747)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$6$3$1.internalExecute(ThriftColumnFamilyQueryImpl.java:742)
at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:56)
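
For comparison, other snippets in this collection build the keyspace context with the CQL version and target Cassandra version set explicitly. A sketch along those lines is below; the cluster, keyspace, and seed values are placeholders, and it is offered as context rather than as a confirmed fix for the exception above:

// Sketch of a context configured for CQL3 against Cassandra 1.2; names are placeholders.
AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
    .forCluster("MyCluster")
    .forKeyspace("my_keyspace")
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setCqlVersion("3.0.0")
        .setTargetCassandraVersion("1.2"))
    .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyPool")
        .setPort(9160)
        .setSeeds("127.0.0.1:9160"))
    .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
    .buildKeyspace(ThriftFamilyFactory.getInstance());

context.start();
Keyspace keyspace = context.getEntity();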

Source: (StackOverflow)

Astyanax's EntityPersister & Collection Updates

Background

Astyanax's Entity Persister saves a Map field of an Entity across multiple columns. The column name format is mapVariable.key.
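
As a concrete illustration of the kind of entity involved (the class and field names here are made up), the entitystore package maps a class annotated with the javax.persistence annotations, and a Map field becomes one column per entry:

import java.util.HashMap;
import java.util.Map;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical entity: each entry of "attributes" is stored as its own
// column named attributes.<key>, per the behavior described above.
@Entity
public class UserProfile {
    @Id
    private String id;

    @Column(name = "attributes")
    private Map<String, String> attributes = new HashMap<String, String>();

    // getters and setters omitted
}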

The Problem:

Astyanax's Entity Persister doesn't remove deleted key/value pairs from Cassandra when a Map in an Entity has been updated.

The Solution I'm Using Now (bad approach)

I'm deleting the whole row and then reinserting it.

Some More Info

I persist my Java objects in Cassandra using Astyanax's Entity Persister (com.netflix.astyanax.entitystore).

What I've noticed is that when an Entity's Map is persisted with, say, 2 values: testkey:testvalue & testkey2:testvalue2, and the next time the same Entity's Map is persisted with one value (one key/value pair was removed): testkey:testvalue, the testkey2:testvalue2 isn't deleted from the column family.

So, as a work-around, I need to delete the whole row and then reinsert it.

My insertion code:

    final EntityManager<T, String> entityManager = new DefaultEntityManager.Builder<T, String>()
            .withEntityType(clazz)
            .withKeyspace(getKeyspace())
            .withColumnFamily(columnFamily)
            .build();
    entityManager.put(entity);

What am I missing? This is really inefficient, and I think Astyanax's entity persister is supposed to take care of this on its own.

Any thoughts?


Source: (StackOverflow)

Astyanax composite column put using AnnotatedCompositeSerializer becomes un-gettable after a few hours via multiple clients

I'm having a bizarre issue. My Java application puts columns into Cassandra via Astyanax. This seems to work temporarily, but after several hours the column seemingly disappears if I get it by [row][composite column]. If I fetch the whole row, or get a range of columns that ought to include the column, then the column is returned. This behavior occurs in multiple clients, including the cassandra-cli and pycassa. For example:

get msg_metadata['someKey'];
=> (column=185779:completed, value={"timestamp":"1407777081","id":"167727"}, timestamp=1407777083241001)
Returned 1 results.
Elapsed time: 58 msec(s)

get msg_metadata['someKey']['185779:completed'];
=> (column=185779:completed, value={"timestamp":"1407777081","id":"167727"}, timestamp=1407777083241001)
Returned 1 results.
Elapsed time: 42 msec(s)

-- several hours later
get msg_metadata['someKey']['185779:completed'];
Value was not found
Elapsed time: 72 msec(s).

get msg_metadata['someKey'];
=> (column=185779:completed, value={"timestamp":"1407777081","id":"167727"}, timestamp=1407777083241001)
Returned 1 results.
Elapsed time: 107 msec(s)

I created the following column family in a Cassandra 1.1.12 cluster:

create column family msg_metadata
with column_type = 'Standard'
and comparator = 'CompositeType(org.apache.cassandra.db.marshal.IntegerType,org.apache.cassandra.db.marshal.AsciiType)'
and default_validation_class = 'UTF8Type'
and key_validation_class = 'AsciiType';

I have the following code, using Astyanax 1.0.3:

public class ColumnFamilies {
    public static final AnnotatedCompositeSerializer<MyField> MY_FIELD =
        new AnnotatedCompositeSerializer<MyField>(MyField.class);

    public static final ColumnFamily<String, MyField> MESSAGE_METADATA =
        new ColumnFamily<String, MyField>(
                "msg_metadata", AsciiSerializer.get(), MY_FIELD);

    public static class MyField {
        @Component(ordinal = 0)
        private Integer myId;

        @Component(ordinal = 1)
        private String fieldName;

        public MyField(Integer myId, String fieldName) {
            this.myId = myId;
            this.fieldName = fieldName;
        }
    }
}

My application writes to the column family like so:

    // "keyspace" here is the application's Astyanax Keyspace instance.
    final MutationBatch mutationBatch = keyspace.prepareMutationBatch();
    final MyField field = new MyField(myId, fieldName);
    mutationBatch.withRow(MESSAGE_METADATA, rowKey).putColumn(field, value, null);
    mutationBatch.execute();

This has been baffling me for some time. Initially I thought the issue might be the column family. I've tried creating new column families and it hasn't helped. My suspicion is that there's something messed up with the composite column serialization, but that's just my intuition. Any ideas what's going on and how I can fix the issue? Thanks!


Source: (StackOverflow)

Astyanax client maximum connections per node?

I am reading data from a Cassandra database using the Astyanax client.

I have around one million unique rows in the database, in a single cross-colocation-centre cluster with four nodes.

These are my four nodes:

  node1:9160
  node2:9160
  node3:9160
  node4:9160

I have key caching enabled, and the SizeTieredCompaction strategy is enabled as well.

I have a multithreaded client program that reads the data from the Cassandra database using the Astyanax client, and I am running it with 20 threads. With 20 threads, the performance of reading the data from Cassandra degrades.

So the first thing that jumps to my mind is that there might be contention over connections to Cassandra (do they use a pool, and if so, how many connections are being maintained?). I am using the code below to make the connection with the Astyanax client.

private CassandraAstyanaxConnection() {
    context = new AstyanaxContext.Builder()
    .forCluster(ModelConstants.CLUSTER)
    .forKeyspace(ModelConstants.KEYSPACE)
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE)
    )
    .withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
        .setPort(9160)
        .setMaxConnsPerHost(1)
        .setSeeds("nod1:9160,node2:9160,node3:9160,node4:9160")
    )
    .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
        .setCqlVersion("3.0.0")
        .setTargetCassandraVersion("1.2"))
    .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
    .buildKeyspace(ThriftFamilyFactory.getInstance());

    context.start();
    keyspace = context.getEntity();

    emp_cf = ColumnFamily.newColumnFamily(
        ModelConstants.COLUMN_FAMILY,
        StringSerializer.get(),
        StringSerializer.get());
}

Do I need to make any sort of changes in the above code to improve the performance?

What does this method do?

   setMaxConnsPerHost(1)

Do I need to increase that to improve performance? I have four nodes, so should I change it to 4?

And what about the setMaxConns(20) method call? Do I need to add that as well to improve performance, since I will be running my program with multiple threads?
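
To make the two settings concrete, here is a sketch of the same connection pool fragment sized so that 20 client threads are not all funneled through one connection per host; the numbers are illustrative assumptions, not tuned recommendations:

// Sketch only: setMaxConnsPerHost caps connections to each individual node,
// while setMaxConns caps the total across all nodes. Values are illustrative.
.withConnectionPoolConfiguration(new ConnectionPoolConfigurationImpl("MyConnectionPool")
    .setPort(9160)
    .setMaxConnsPerHost(10)   // up to 10 connections per node
    .setMaxConns(40)          // up to 40 connections in total across the 4 nodes
    .setSeeds("node1:9160,node2:9160,node3:9160,node4:9160")
)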


Source: (StackOverflow)

PoolTimeoutException when connecting to Cassandra via Astyanax

I am trying to connect to a local Cassandra using Astyanax, but I constantly get a PoolTimeoutException. I am able to connect to Cassandra using the CLI or the Hector client. Any idea what I am doing wrong?

Thanks.

My code:

val context = new AstyanaxContext.Builder()
        .forCluster("cluster")
        .forKeyspace(keyspace)
        .withAstyanaxConfiguration(
                new AstyanaxConfigurationImpl()
                    .setDiscoveryType(NodeDiscoveryType.NONE)
        )
        .withConnectionPoolConfiguration(
                new ConnectionPoolConfigurationImpl("ConnPool")
                    .setPort(9160)
                    .setMaxConnsPerHost(1)
                    .setMaxBlockedThreadsPerHost(1)
                    .setSeeds("127.0.0.1:9160")
                    .setConnectTimeout(10000)
        )
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance())
    context.start()
    return context.getEntity()

Exception:

Exception in thread "main" java.lang.RuntimeException: com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=127.0.0.1(127.0.0.1):9160, latency=10004(10004), attempts=1] Timed out waiting for connection
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$7.getNextBlock(ThriftColumnFamilyQueryImpl.java:652)
at com.netflix.astyanax.thrift.ThriftAllRowsImpl$1.hasNext(ThriftAllRowsImpl.java:61)
at scala.collection.JavaConversions$JIteratorWrapper.hasNext(JavaConversions.scala:574)
at scala.collection.Iterator$class.foreach(Iterator.scala:772)
at scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:573)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:73)
at scala.collection.JavaConversions$JIterableWrapper.foreach(JavaConversions.scala:587)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:60)
at scala.App$$anonfun$main$1.apply(App.scala:60)
at scala.collection.LinearSeqOptimized$class.foreach(LinearSeqOptimized.scala:59)
at scala.collection.immutable.List.foreach(List.scala:76)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:30)
at scala.App$class.main(App.scala:60)
Caused by: com.netflix.astyanax.connectionpool.exceptions.PoolTimeoutException: PoolTimeoutException: [host=127.0.0.1(127.0.0.1):9160, latency=10004(10004), attempts=1] Timed out waiting for connection
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.waitForConnection(SimpleHostConnectionPool.java:201)
at com.netflix.astyanax.connectionpool.impl.SimpleHostConnectionPool.borrowConnection(SimpleHostConnectionPool.java:158)
at com.netflix.astyanax.connectionpool.impl.RoundRobinExecuteWithFailover.borrowConnection(RoundRobinExecuteWithFailover.java:60)
at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:50)
at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:229)
at com.netflix.astyanax.thrift.ThriftColumnFamilyQueryImpl$7.getNextBlock(ThriftColumnFamilyQueryImpl.java:623)

Source: (StackOverflow)

How can I query a Cassandra cluster for its metadata?

We have a process creatively named "bootstrap" that sets up our Cassandra clusters for a given rev of software in an environment (Dev1, Dev2, QA, ..., PROD). This bootstrap creates/updates keyspaces and column families, as well as populating initial data in non-prod.

We are using Astyanax, but we could use Hector for bootstrapping.

Given that another team has decided that each environment will have its own datacenter names, given that I want this to work in prod when we go from two datacenters to more, and given that we will be using PropertyFileSnitch:

How can I ask the Cassandra cluster for its layout (without shelling out to nodetool ring)?

Specifically, I need to know the names of the datacenters so I can create or update a keyspace with the correct strategy options when using NetworkTopologyStrategy. We want 3 copies per datacenter. Some environments have one datacenter and several have two; eventually production will have more.

Is there CQL or a Thrift call that will give me info about the cluster layout?

I have looked through several TOCs in various doc sets and googled a bit. I thought I would ask here before digging through the nodetool code.
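
One possible avenue, offered as a sketch rather than a confirmed answer: Cassandra 1.2+ exposes the datacenter name of the local node and of its peers in the system.local and system.peers tables, which can be read over CQL3. With Astyanax that might look roughly like this (the column family object is only there to route the query, all names are placeholders, and the context needs CQL3 enabled):

// Hypothetical sketch: collect datacenter names from the system tables.
ColumnFamily<String, String> CF_ANY = ColumnFamily.newColumnFamily(
        "peers", StringSerializer.get(), StringSerializer.get());

Set<String> datacenters = new HashSet<String>();

CqlResult<String, String> local = keyspace.prepareQuery(CF_ANY)
        .withCql("SELECT data_center FROM system.local;")
        .execute().getResult();
for (Row<String, String> row : local.getRows()) {
    datacenters.add(row.getColumns().getStringValue("data_center", null));
}

CqlResult<String, String> peers = keyspace.prepareQuery(CF_ANY)
        .withCql("SELECT data_center FROM system.peers;")
        .execute().getResult();
for (Row<String, String> row : peers.getRows()) {
    datacenters.add(row.getColumns().getStringValue("data_center", null));
}

Keyspace.describeRing() on the Thrift side is another thing worth looking at, though whether it surfaces datacenter names depends on the Cassandra and Astyanax versions in play.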


Source: (StackOverflow)

Token-aware Astyanax connection pool connecting to nodes without distributing connections across them

I was using an Astyanax connection pool defined like this:

ipSeeds = "LOAD_BALANCER_HOST:9160";
conPool.setSeeds(ipSeeds)
.setDiscoveryType(NodeDiscoveryType.TOKEN_AWARE)
.setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE);

However, my cluster has 4 nodes and I have 8 client machines connecting to it. LOAD_BALANCER_HOST forwards requests to one of my four nodes.

On a client node, I have:

$netstat -an | grep 9160 | awk '{print $5}' | sort |uniq -c
    235 node1:9160
    680 node2:9160
      4 node3:9160
      4 node4:9160

So although the ConnectionPoolType is TOKEN_AWARE, my client seems to be connecting mainly to node2, sometimes to node1, but almost never to nodes 3 and 4.
The question is: why is this happening? Shouldn't a token-aware connection pool query the ring for the node list and connect to all the active nodes using a round-robin algorithm?
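
For comparison, here is a variant of the same pool definition that seeds every node directly instead of going through the load balancer; the host names are the four nodes above, and whether this changes the connection distribution is exactly what is being asked, so treat it as a sketch to experiment with rather than a known fix:

// Sketch: seed the pool with all four nodes instead of the load balancer,
// keeping the token-aware discovery and pool type from the question.
ipSeeds = "node1:9160,node2:9160,node3:9160,node4:9160";
conPool.setSeeds(ipSeeds)
    .setDiscoveryType(NodeDiscoveryType.TOKEN_AWARE)
    .setConnectionPoolType(ConnectionPoolType.TOKEN_AWARE);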


Source: (StackOverflow)

Secure communication between Astyanax and Cassandra

Has anyone come up with a way to secure communication between Cassandra and the Astyanax client? SSL is preferred, to be able to do client cert auth + encryption...


Source: (StackOverflow)