Ceph Object Store, or: How Do You Store Terabytes of Documents? Daniel Schneller [email protected] @dschneller


Page 1: Ceph Object Store

Ceph Object Store, or: How Do You Store Terabytes of Documents?

Daniel Schneller [email protected] @dschneller

Page 2: Ceph Object Store

Who are we?

Page 3: Ceph Object Store

@dschneller @drivebytesting

Page 4: Ceph Object Store

What do we do?

Page 5: Ceph Object Store
Page 6: Ceph Object Store

Where did we come from?

Page 7: Ceph Object Store

Why did we want to move away?

Page 8: Ceph Object Store

Where did we want to go?

Page 9: Ceph Object Store

And how did we get there?

Page 10: Ceph Object Store

Ceph Basics

Page 11: Ceph Object Store

“Unified, distributed storage system designed for excellent performance, reliability and scalability”

Page 12: Ceph Object Store

Highly scalable

Commodity Hardware

No single point of failure

Page 13: Ceph Object Store

Ceph Components

Page 14: Ceph Object Store

OSD Daemons: Object Storage Device Daemons

Page 15: Ceph Object Store

CRUSH Algorithm: Intelligent object placement without central metadata
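CRUSH lets any client compute an object's location from the cluster map alone, so there is no central metadata server to query or to fail. The following toy sketch (purely illustrative, not the real CRUSH algorithm; the function name is ours) shows the core idea of deterministic, hash-based replica selection:

```python
import hashlib

def toy_place(obj_name, osds, replicas=3):
    """Toy stand-in for CRUSH: deterministically pick `replicas`
    distinct OSDs for an object by ranking all OSDs with a hash
    of (object name, OSD id). Real CRUSH additionally honors
    failure domains and device weights."""
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.md5(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

# Every client computes the same placement without asking anyone:
print(toy_place("document-42", list(range(48))))
```

Because placement is a pure function of the object name and the cluster map, all clients independently arrive at the same set of OSDs.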

Page 16: Ceph Object Store

RADOS: Reliable Autonomic Distributed Object Store

Page 17: Ceph Object Store

Objects

Page 18: Ceph Object Store

Data Pools: Collection points for objects with the same requirements

Page 19: Ceph Object Store
Page 20: Ceph Object Store

Placement Groups
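A common rule of thumb from the Ceph documentation of that era: aim for roughly 100 placement groups per OSD, divided by the replication factor, rounded up to the next power of two. A small sketch (the helper name is ours; 48 OSDs and 3 replicas match the cluster described in these slides):

```python
def suggest_pg_count(num_osds, replicas, per_osd=100):
    """Rule-of-thumb PG count: ~100 PGs per OSD divided by the
    replication factor, rounded up to the next power of two."""
    raw = num_osds * per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

# 48 OSDs, 3 replicas: 48 * 100 / 3 = 1600, next power of two:
print(suggest_pg_count(48, 3))  # -> 2048
```

Too few PGs distribute data unevenly; far too many cost memory and peering overhead on every OSD, so the heuristic aims for a middle ground.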

Page 21: Ceph Object Store
Page 22: Ceph Object Store
Page 23: Ceph Object Store

Monitors: The first point of contact for clients
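Monitors maintain the authoritative cluster map and agree on changes through a Paxos-style quorum: a strict majority of them must be up. That is why Ceph clusters run an odd number of monitors, typically 3 or 5. The majority arithmetic in a quick sketch:

```python
def tolerated_monitor_failures(num_mons):
    """A Paxos quorum needs a strict majority (n // 2 + 1) of
    monitors, so n monitors tolerate the remainder failing."""
    quorum = num_mons // 2 + 1
    return num_mons - quorum

# Even counts add no fault tolerance over the next-lower odd count:
for n in (1, 2, 3, 4, 5):
    print(n, "monitors tolerate", tolerated_monitor_failures(n), "failure(s)")
```

Note that 4 monitors tolerate no more failures than 3, which is the usual argument for odd cluster sizes.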

Page 24: Ceph Object Store

Hardware Setup

Page 25: Ceph Object Store

[Diagram: infrastructure stack, bottom to top: Bare Metal Hardware, Storage Virtualization, Compute Virtualization, Network Virtualization, Virtual Infrastructure, Application]

Page 26: Ceph Object Store


Page 27: Ceph Object Store
Page 28: Ceph Object Store

Baseline Benchmarks: Defining expectations

Page 29: Ceph Object Store

Storage: Disk I/O per node
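Before benchmarking Ceph itself, it pays to measure the raw disk throughput of every node, typically with dd (oflag=direct) or fio. As a minimal, illustrative stand-in, here is a Python sketch that times a synced sequential write; treat its numbers as rough and use fio for real baselines:

```python
import os
import tempfile
import time

def sequential_write_mb_per_s(size_mb=64, block_kb=1024):
    """Rough sequential-write throughput: write size_mb of data in
    block_kb chunks to a temp file, fsync, and divide by the
    elapsed wall time. Unlike dd with oflag=direct, this still
    goes through the page cache until the final fsync."""
    block = os.urandom(block_kb * 1024)
    fd, path = tempfile.mkstemp()
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mb * 1024 // block_kb):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())
        elapsed = time.perf_counter() - start
        return size_mb / elapsed
    finally:
        os.unlink(path)

print(f"{sequential_write_mb_per_s():.1f} MB/s")
```

Running this (or dd/fio) per disk and per node establishes the ceiling against which later Ceph results can be judged.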

Page 30: Ceph Object Store
Page 31: Ceph Object Store
Page 32: Ceph Object Store

Network

Page 33: Ceph Object Store

IEEE 802.3ad != IEEE 802.3ad

> cat /etc/network/interfaces
...
auto bond2
iface bond2 inet manual
    bond-slaves p2p3 p2p4                    # interfaces to bond
    bond-mode 802.3ad                        # activate LACP
    bond-miimon 100                          # monitor link health
    bond-xmit_hash_policy layer3+4           # use layer 3+4 for link selection
    pre-up ip link set dev bond2 mtu 9000    # set jumbo frames

auto vlan-ceph-clust
iface vlan-ceph-clust inet static
    pre-up ip link add link bond2 name vlan-ceph-clust type vlan id 105
    pre-up ip link set dev vlan-ceph-clust mtu 9000   # jumbo frames
    post-down ip link delete vlan-ceph-clust
    address ...
    netmask ...
    network ...
    broadcast ...
...

Page 34: Ceph Object Store

IEEE 802.3ad != IEEE 802.3ad

[node01] > iperf -s -B node01.ceph-cluster
[node02] > iperf -c node01.ceph-cluster -P 2
[node03] > iperf -c node01.ceph-cluster -P 2
------------------------------------------------------------
Server listening on TCP port 5001
Binding to local address node01.ceph-cluster
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.102.5.11 port 5001 connected with 10.102.5.12 port 49412
[  5] local 10.102.5.11 port 5001 connected with 10.102.5.12 port 49413
[  6] local 10.102.5.11 port 5001 connected with 10.102.5.13 port 59947
[  7] local 10.102.5.11 port 5001 connected with 10.102.5.13 port 59946
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec   342 MBytes   286 Mbits/sec
[  5]  0.0-10.0 sec   271 MBytes   227 Mbits/sec
[SUM]  0.0-10.0 sec   613 MBytes   513 Mbits/sec
[  6]  0.0-10.0 sec   293 MBytes   246 Mbits/sec
[  7]  0.0-10.0 sec   338 MBytes   283 Mbits/sec
[SUM]  0.0-10.0 sec   631 MBytes   529 Mbits/sec

Page 35: Ceph Object Store

IEEE 802.3ad != IEEE 802.3ad

(The same iperf output as on the previous slide.) ??? Despite the bonded interfaces, the two clients together deliver only about 1 Gbit/s (513 + 529 Mbits/sec), roughly the capacity of a single GbE link.
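One plausible reading, hinted at by the slide title: both ends of an 802.3ad bond hash flows onto physical links independently, and switches do not necessarily use the same xmit_hash_policy as the Linux host. With a layer2-style policy, all traffic between two hosts shares one link no matter how many TCP connections run. The sketch below illustrates the idea behind the two policies in simplified form; these are not the exact kernel formulas:

```python
def layer2_slave(src_mac, dst_mac, num_links):
    """Simplified layer2 hash policy: only the MAC addresses enter
    the hash, so every flow between the same two hosts lands on
    the same bond slave."""
    return (int(src_mac.split(":")[-1], 16)
            ^ int(dst_mac.split(":")[-1], 16)) % num_links

def layer34_slave(src_ip, src_port, dst_ip, dst_port, num_links):
    """Simplified layer3+4 policy: IPs and TCP/UDP ports enter the
    hash, so distinct connections can spread across slaves."""
    ip_bits = int(src_ip.split(".")[-1]) ^ int(dst_ip.split(".")[-1])
    return (src_port ^ dst_port ^ ip_bits) % num_links

# The two iperf streams above: same IP pair, different source ports
for port in (49412, 49413):
    print(layer34_slave("10.102.5.12", port, "10.102.5.11", 5001, 2))
```

If either side of the path hashes by MAC or IP pair only, all streams from one client collapse onto one slave, which is consistent with the ~1 Gbit/s ceiling measured above.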

Page 36: Ceph Object Store

Measure! …and understand the results

Page 37: Ceph Object Store

CenterDevice

Page 38: Ceph Object Store

Overall Architecture

[Diagram: Ceph on bare metal: four nodes (Node 1 to Node 4) hosting OSD 1 through OSD 48]

Page 39: Ceph Object Store

Overall Architecture

[Diagram: the same four bare-metal Ceph nodes (OSD 1 through OSD 48), each now also running a Rados GW]

Page 40: Ceph Object Store

Overall Architecture

[Diagram: a layer of VMs (VM 1, VM …), each running HAProxy, added in front of the Rados GWs on the four bare-metal Ceph nodes]

Page 41: Ceph Object Store

Overall Architecture

[Diagram: each VM runs HAProxy, CenterDevice, and Swift on top of the Rados GWs on the four bare-metal Ceph nodes]

Page 42: Ceph Object Store

Advantages

Page 43: Ceph Object Store

Disadvantages

Page 44: Ceph Object Store

Caveats

Page 45: Ceph Object Store

CephFS: Not recommended for production data.

Page 46: Ceph Object Store

Scrubbing: Integrity has its price. But you can do something about it!
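Scrub and especially deep-scrub runs compete with client I/O, but Ceph exposes knobs to confine them to quiet hours and idle nodes. A sketch of the relevant ceph.conf options; option names and availability vary between Ceph releases, and the values here are illustrative, not recommendations:

```ini
[osd]
# keep routine scrubbing outside business hours
osd scrub begin hour = 22
osd scrub end hour = 6
# skip scrubs while the node is already under load
osd scrub load threshold = 0.5
# stretch the deep-scrub cadence (seconds; default is one week)
osd deep scrub interval = 1209600
```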

Page 47: Ceph Object Store

Future

Page 48: Ceph Object Store

Rados Gateway

Ceph Caching Tier

SSD based Journaling

10GBit/s Networking

Page 49: Ceph Object Store

In Closing

Page 50: Ceph Object Store

Slides on Slideshare: http://www.slideshare.net/dschneller

Page 51: Ceph Object Store

Handout on CenterDevice: https://public.centerdevice.de/399612bf-ce31-489f-bd58-04e8d030be52

Page 52: Ceph Object Store

@drivebytesting @dschneller

Page 53: Ceph Object Store

The End

Daniel Schneller [email protected] @dschneller