# Tape storage (based on Ceph)

{bdg-primary}`Service of Kiel University` {bdg-success}`Active`

The CRC's tape storage (based on Ceph, an open-source software-defined storage platform) is the central location for storing and sharing data with all researchers in the centre. With everything in one place, it is easy to access and share data within the CRC community.

Filesystem | Size (total)
---------- | ------------
`\\datarepo01.rz.uni-kiel.de\SFB_1261` | Unlimited
`\\tape-speicher.rz.uni-kiel.de\SFB_1261` (alternative) | Unlimited

## Access

Instructions for accessing the tape storage can be found [here](https://www.rz.uni-kiel.de/en/hints-howtos/connecting-a-network-share?set_language=en).

:::{note}
{octicon}`key;1em;sd-text-info` Access is only permitted for members of CRC 1261 and must be requested.

{octicon}`lock;1em;sd-text-info` The service can only be accessed from the university network. Therefore, a VPN connection is required when connecting from outside the university (e.g., from home).

For more information, see the computing centre's official [service information](https://www.rz.uni-kiel.de/en/our-portfolio/storage/network-drive-file-server).
:::

### Debian

To mount the tape storage on Debian-based systems, follow the steps below:

1. Install the necessary dependencies. On Debian-based systems, you can use the following command to install `cifs-utils` and `keyutils`:

   ```sh
   sudo apt-get install cifs-utils keyutils
   ```

2. Create a directory where you want to mount the tape storage. In this example, we will create the directory `/mnt/SFB_1261`:

   ```sh
   sudo mkdir /mnt/SFB_1261
   ```

3. Use the `mount` command to mount the tape storage to the newly created directory. Replace `suabc123` with your own username:

   ```sh
   sudo mount -t cifs //datarepo01.rz.uni-kiel.de/SFB_1261 /mnt/SFB_1261 -o uid=1000,gid=1000,rw,user,username=suabc123,domain=uni-kiel.de
   ```

   Note that the alternative share path `//tape-speicher.rz.uni-kiel.de/SFB_1261` works as well.

That's it! The tape storage is now mounted and accessible at `/mnt/SFB_1261`. To unmount the tape storage, use the `umount` command:

```sh
sudo umount /mnt/SFB_1261
```
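If you use the share regularly, you may want to make the mount persistent. The entry below is a minimal sketch of how this could look in `/etc/fstab`, assuming the mount point from above and a credentials file at the hypothetical path `/etc/cifs-credentials` (containing `username=`, `password=`, and `domain=` lines); adjust the options to your setup:

```sh
# Hypothetical /etc/fstab entry (a single line); keep the credentials file readable by root only (chmod 600)
//datarepo01.rz.uni-kiel.de/SFB_1261  /mnt/SFB_1261  cifs  credentials=/etc/cifs-credentials,uid=1000,gid=1000,rw,noauto,user  0  0
```

With `noauto,user`, the share is not mounted automatically at boot, but a regular user can mount it on demand with `mount /mnt/SFB_1261`.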
## Backups

Daily incremental backups covering the last 60 days are performed to ensure data safety.

## Snapshots

The tape storage offers snapshots for self-service restore. This section provides a step-by-step guide for accessing and utilising snapshots for self-service restore operations.

1. **Accessing the Snapshots Directory:** Navigate to the `.snap` directory inside the mounted share, where the snapshots are stored. This directory is typically hidden from standard directory listings.

   ```bash
   cd .snap
   ```

2. **Listing Snapshots:** List the contents of the `.snap` directory to view the available snapshots:

   ```bash
   ls -l
   ```

3. **Identifying Snapshots:** Snapshots are represented as directories within the `.snap` directory, with names containing the timestamp of the corresponding scheduled backup (see the example in the next step).

4. **Accessing Snapshot Contents:** Enter the desired snapshot directory to access its contents. Opening a directory might take a while, depending on the snapshot size.

   ```bash
   cd _scheduled-2024-03-04-14_00_00_UTC_1099552913927
   ```

5. **Restoring Files:** Once inside the snapshot directory, use the `cp` command to copy the desired files back to their original location outside the `.snap` directory:

   ```bash
   cp -a path/to/file /path/to/destination
   ```

## Recommendations

- The top level of the project network drive contains directories for each project group. It is important that you add your data to the corresponding folder of your project group to ensure proper organisation and access for your colleagues. This also helps keep the project drive tidy and efficient for all users.
- Please follow the established [guidelines for file and folder naming](../best_practices/file_naming.md) to ensure efficient and organised data management.
- When storing large amounts of data, it is recommended to compress the files to optimise storage space (see the sketch after this list). If your data is too large to handle this way, please reach out to the CRC staff for further assistance.
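As an example for the compression recommendation, the following sketch packs a hypothetical measurement folder into a single compressed archive before copying it to the mounted share; the folder name, the project group folder `A01`, and the mount point `/mnt/SFB_1261` are placeholders for your own setup:

```sh
# Pack and compress a hypothetical data folder into a single archive
tar -czf measurements_2024.tar.gz measurements_2024/

# Copy the archive into your project group's folder on the mounted share
# (assumes the share is mounted at /mnt/SFB_1261 and a hypothetical group folder "A01")
cp measurements_2024.tar.gz /mnt/SFB_1261/A01/
```

Packing many small files into one archive also tends to speed up transfers compared with copying the files individually.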