Class ReplicaAwareFailureHandler<I extends org.apache.cassandra.spark.common.model.CassandraInstance>

  • Type Parameters:
    I - CassandraInstance type
    Direct Known Subclasses:
    MultiClusterReplicaAwareFailureHandler

    public abstract class ReplicaAwareFailureHandler<I extends org.apache.cassandra.spark.common.model.CassandraInstance>
    extends java.lang.Object
    Handles write failures for a single cluster.
    • Constructor Detail

      • ReplicaAwareFailureHandler

        public ReplicaAwareFailureHandler()
    • Method Detail

      • getFailedRanges

        public abstract java.util.List<ReplicaAwareFailureHandler.ConsistencyFailurePerRange> getFailedRanges​(TokenRangeMapping<I> tokenRangeMapping,
                                                                                                              JobInfo job,
                                                                                                              ClusterInfo cluster)
        Given the number of failed instances for each token range, validates whether the consistency guarantees are maintained for the job.
        Parameters:
        tokenRangeMapping - the mapping of token ranges to Cassandra instances
        job - the job to verify
        cluster - cluster info
        Returns:
        list of failed token ranges that break consistency. This should ideally be empty for a successful operation.
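The consistency validation behind this method boils down to quorum arithmetic. The sketch below is an illustration of that check, not the library's implementation; the class and method names (`ConsistencyCheckSketch`, `quorum`, `breaksConsistency`) are assumptions for this example:

```java
// Hypothetical sketch of the per-range quorum math behind getFailedRanges.
// Not the real API: the handler implementations perform this check internally.
public class ConsistencyCheckSketch {

    // Minimum replicas that must succeed for QUORUM at a given replication factor.
    public static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // A range breaks consistency when the surviving replicas can no longer
    // reach quorum, i.e. failures exceed rf - quorum(rf).
    public static boolean breaksConsistency(int replicationFactor, int failedReplicas) {
        return replicationFactor - failedReplicas < quorum(replicationFactor);
    }

    public static void main(String[] args) {
        // RF=3: one failed replica still leaves 2 >= quorum(3)=2, so the range is fine.
        System.out.println(breaksConsistency(3, 1)); // false
        // RF=3: two failed replicas leave only 1 < 2; the range would appear
        // in the list returned by getFailedRanges.
        System.out.println(breaksConsistency(3, 2)); // true
    }
}
```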
      • addFailure

        public abstract void addFailure​(com.google.common.collect.Range<java.math.BigInteger> tokenRange,
                                        I instance,
                                        java.lang.String errMessage)
        Adds a new token range as a failed token range, with errors on the given instance.

        It is guaranteed that failedRangesMap already contains ranges overlapping the range being inserted (see the constructor: the complete ring is added first).

        The scheme is therefore: first, get the list of overlapping ranges. For each overlapping range, take its failure map, make a copy, and add the new failure to the copy. It is important to copy rather than mutate the map returned from failedRangesMap, because the new range may overlap an existing range only partially, and that map may still be shared by another range.

        Parameters:
        tokenRange - the range which failed
        instance - the instance on which the range failed
        errMessage - the error that occurred for this particular range/instance pair
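The copy-on-write splitting described above can be sketched with plain JDK collections. The real method works with Guava's Range&lt;BigInteger&gt; as the signature shows; this sketch uses long tokens, half-open [start, end) ranges, and illustrative names (`FailureRangeMapSketch`, `Entry`, `failureCount`) that are not part of the actual API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch of the copy-on-write range-splitting scheme addFailure describes.
public class FailureRangeMapSketch {
    // One entry per disjoint token range: [start, end) -> (instance -> errors).
    static final class Entry {
        final long start, end;
        final Map<String, List<String>> failures;
        Entry(long start, long end, Map<String, List<String>> failures) {
            this.start = start; this.end = end; this.failures = failures;
        }
    }

    private final List<Entry> ranges = new ArrayList<>();

    // The whole ring is covered up front, mirroring the "complete ring first" note above.
    public FailureRangeMapSketch(long ringStart, long ringEnd) {
        ranges.add(new Entry(ringStart, ringEnd, new HashMap<>()));
    }

    // Record a failure on [start, end): split each overlapping entry at the
    // boundaries, then COPY its failure map before mutating, so partially
    // overlapping neighbours keep their original (possibly shared) map untouched.
    public void addFailure(long start, long end, String instance, String errMessage) {
        List<Entry> next = new ArrayList<>();
        for (Entry e : ranges) {
            long lo = Math.max(e.start, start), hi = Math.min(e.end, end);
            if (lo >= hi) { next.add(e); continue; }                        // no overlap
            if (e.start < lo) next.add(new Entry(e.start, lo, e.failures)); // left remainder, map shared
            Map<String, List<String>> copy = new HashMap<>();               // the important copy
            e.failures.forEach((k, v) -> copy.put(k, new ArrayList<>(v)));
            copy.computeIfAbsent(instance, k -> new ArrayList<>()).add(errMessage);
            next.add(new Entry(lo, hi, copy));
            if (hi < e.end) next.add(new Entry(hi, e.end, e.failures));     // right remainder, map shared
        }
        ranges.clear();
        ranges.addAll(next);
    }

    // Number of distinct failed instances covering the given token.
    public int failureCount(long token) {
        for (Entry e : ranges)
            if (e.start <= token && token < e.end) return e.failures.size();
        return 0;
    }

    public static void main(String[] args) {
        FailureRangeMapSketch map = new FailureRangeMapSketch(0, 100);
        map.addFailure(10, 50, "node1", "write timeout");
        map.addFailure(30, 70, "node2", "instance down");
        System.out.println(map.failureCount(40)); // 2: node1 and node2 both failed here
    }
}
```

Mutating the copy instead of the shared map is the whole point: the left and right remainders of a partial overlap keep referencing the original map, so writing into it directly would incorrectly attribute the new failure to token ranges outside [start, end).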
      • getFailedInstances

        public abstract java.util.Set<I> getFailedInstances()
        Returns:
        the set of all failed instances
      • getFailedRangesInternal

        protected abstract java.util.List<ReplicaAwareFailureHandler.ConsistencyFailurePerRange> getFailedRangesInternal​(TokenRangeMapping<I> tokenRangeMapping,
                                                                                                                         ConsistencyLevel cl,
                                                                                                                         @Nullable
                                                                                                                         java.lang.String localDC,
                                                                                                                         org.apache.cassandra.spark.data.ReplicationFactor replicationFactor)
        Given the number of failed instances for each token range, validates whether the consistency guarantees are maintained for the size of the ring and the given consistency level.
        Parameters:
        tokenRangeMapping - the mapping of token ranges to Cassandra instances
        cl - the desired consistency level
        localDC - the local datacenter
        replicationFactor - replication of the enclosing keyspace
        Returns:
        list of failed token ranges that break consistency. This should ideally be empty for a successful operation.
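The localDC parameter matters for datacenter-local consistency levels: for LOCAL_QUORUM, only replicas (and failures) in the local datacenter count toward the quorum. A hedged sketch of that distinction, with assumed names (`LocalQuorumSketch`, `rangeFails`) and a simplified per-DC replication-factor map standing in for the real ReplicationFactor type:

```java
import java.util.Map;

// Illustrative sketch of the LOCAL_QUORUM case getFailedRangesInternal must handle:
// only the local datacenter's replication factor and failures are consulted.
public class LocalQuorumSketch {

    public static boolean rangeFails(Map<String, Integer> rfByDc,      // dc -> replication factor
                                     Map<String, Integer> failedByDc,  // dc -> failed replicas in range
                                     String localDC) {
        int rf = rfByDc.getOrDefault(localDC, 0);
        int failed = failedByDc.getOrDefault(localDC, 0);
        int quorum = rf / 2 + 1;
        // The range breaks consistency when surviving LOCAL replicas fall below quorum.
        return rf - failed < quorum;
    }

    public static void main(String[] args) {
        Map<String, Integer> rf = Map.of("dc1", 3, "dc2", 3);
        // Two failures in dc2 do not matter for LOCAL_QUORUM evaluated in dc1.
        System.out.println(rangeFails(rf, Map.of("dc2", 2), "dc1")); // false
        // Two failures in dc1 leave 1 < quorum(3)=2 local replicas: the range fails.
        System.out.println(rangeFails(rf, Map.of("dc1", 2), "dc1")); // true
    }
}
```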