Gluster 4.0

Kaushal (kshlm/kshlmster)

GlusterD Maintainer

What's next in Gluster?

Agenda

  • Quick Gluster intro
  • Gluster-4.0

What Is Gluster?

  • Software defined network storage system
    • Distributed, Replicated/EC
    • No metadata server
  • Commodity hardware and scale-out
  • POSIX compatible
    • Multiple access methods

GlusterFS Terms

  • Peer/Node/Server - A computer with the GlusterFS server packages installed
  • Trusted Storage Pool - The GlusterFS cluster
  • Client - Accesses GlusterFS using the native protocol

GlusterFS Terms

  • Brick - Smallest unit of storage in GlusterFS
  • Volume - A logical collection of bricks that appears as a single export to clients

[Diagram: Volume1, Volume2, Volume3]

GlusterFS Terms

  • Translators - Modular bits of GlusterFS that implement the actual features
  • Volume graph - A graph of translators arranged together to create a volume
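The translator/volume-graph idea can be sketched as a stack of composable layers that all speak the same interface, each wrapping the one below it. The following Go sketch is purely illustrative; the interface and names are assumptions, not Gluster's actual translator API:

```go
package main

import "fmt"

// Translator is a hypothetical stand-in for a GlusterFS translator:
// every layer in the volume graph implements the same operations.
type Translator interface {
	Write(path, data string) string
}

// posix sits at the bottom of a brick's graph and talks to the local FS.
type posix struct{ brick string }

func (p posix) Write(path, data string) string {
	return fmt.Sprintf("wrote %q to %s%s", data, p.brick, path)
}

// replicate fans a write out to every child in the graph.
type replicate struct{ children []Translator }

func (r replicate) Write(path, data string) string {
	out := ""
	for _, c := range r.children {
		out += c.Write(path, data) + "\n"
	}
	return out
}

func main() {
	// A tiny volume graph: replicate on top of two posix bricks.
	graph := replicate{children: []Translator{
		posix{brick: "server1:/bricks/b1"},
		posix{brick: "server2:/bricks/b1"},
	}}
	fmt.Print(graph.Write("/file.txt", "hello"))
}
```

Because every feature is just another translator, composing a volume is a matter of arranging translators into a graph, which is what the volume graph term refers to.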

[Diagram: volume graph from Client to Brick]

Using Gluster

  • gluster peer probe <hostname>
  • gluster volume create <name> replica 2 <peername>:/path <peername>:/path ...
  • gluster volume start <volumename>
  • mount -t glusterfs <peername>:<volname> /<path to mountpoint>

Past and Present

Origin

  • Gluster - GNU Cluster
    • Linux distro
    • Clustered supercomputer
  • GlusterFS provided storage
    • GlusterFS v1 - Part of main Gluster project
    • GlusterFS v2 - Split into a separate project
    • GlusterFS v3 - Primary project

GlusterFS-3.x

  • 3.0 - December 2009
    • Protocol defined
  • 3.1 - October 2010
    • GlusterD, NFS

GlusterFS-3.x

  • 3.2 and beyond
    • EC, Snapshots
    • GFAPI, NFS Ganesha, Samba
    • Heketi
  • 3.13 - December 2017
    • Current release

Future

Gluster-4.0

  • Next major release
  • Late February 2018
  • Short-term maintenance release
  • Drop support for older distros
    • EL6

Gluster-4.0

  • Protocol changes*
  • GlusterD2*
  • Metrics
  • FIPS
  • Performance enhancements

Gluster-4.x

  • RIO - Relation Inherited Object distribution
  • JBR - Journal based replication
  • GFProxy
  • Halo replication
  • Thin arbiter
  • +1 scaling
  • More automated cluster management

Gluster-4.0

Protocol Changes

  • New on-wire RPC
  • Better XDR structs
    • More defined members
    • New dictionary
  • Old RPC version still available
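One way to picture the "better XDR structs" and "new dictionary" points: the legacy dict serialized every value as an opaque string blob, while a typed dictionary carries an explicit type tag so receivers do not have to guess how to decode a value. The Go sketch below is an assumption for illustration only, not Gluster's real XDR wire format:

```go
package main

import "fmt"

// ValueType is a hypothetical type tag carried with each dict entry,
// in contrast to the old dict where everything was an untyped blob.
type ValueType int

const (
	TypeStr ValueType = iota
	TypeInt
	TypeUUID
)

// DictEntry sketches a typed dictionary entry; field names are
// illustrative assumptions, not the actual on-wire struct members.
type DictEntry struct {
	Key   string
	Type  ValueType // new: explicit type tag on the wire
	Value []byte
}

func typeName(t ValueType) string {
	switch t {
	case TypeInt:
		return "int"
	case TypeUUID:
		return "uuid"
	default:
		return "str"
	}
}

func main() {
	entries := []DictEntry{
		{Key: "brick-count", Type: TypeInt, Value: []byte("2")},
		{Key: "transport", Type: TypeStr, Value: []byte("tcp")},
	}
	for _, e := range entries {
		// The receiver can decode each value based on its tag.
		fmt.Printf("%s (%s) = %s\n", e.Key, typeName(e.Type), e.Value)
	}
}
```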

GlusterD2

  • New management system for Gluster-4.0
    • Not backwards compatible with GD1
    • Backwards compatible with clients
  • From scratch rewrite, written in Go
  • Better scalability, integration, and maintainability

GlusterD2

  • ReST/HTTP API
    • JSON requests and responses
  • New CLI
  • More flexible and pluggable internal frameworks
    • Transaction, Volgen, ReST, Events
  • Uses Etcd internally
    • Automatic Etcd setup/management

GD2 in Gluster-4.0

  • Technology preview version

  • Most GD1 commands to be implemented

  • Preliminary automatic volume creation

  • No upgrade/migration support from 3.x

GD2 in Gluster-4.1+

  • Stabilize

  • Documentation on commands, APIs and different workflows

  • Support upgrade/migration from 3.x

  • Centralized logging and tracing

  • Fully automatic volume management, i.e. dynamic volume provisioning and +1 scaling

  • Automatic cluster formation

  • More native APIs for integration and workflows

Upgrading to 4.x

  • Avoid filesystem downtime
    • Existing clients can retain continuous access
  • GD1 and GD2 shipped in 4.0 and 4.1
    • GD1 planned to be removed from 4.2
    • No GD1 feature updates

Upgrading to 4.x

  • Rolling upgrade from 3.x to 4.0 using GD1
    • Install 4.0 on one node
    • Restart Gluster on the node
    • Heal volume
    • Continue on next node
  • Kill GD1 and start GD2 everywhere
  • Migrate/import data into GD2
  • GD2 picks up running bricks and daemons
  • Upgrade done without total downtime

Thank You!

Gluster 4.0 @ FOSDEM-2018

By Kaushal Madappa
