SHOCLOUD: your private personal cloud

1. Introduction: The Cloud Paradigm and the Imperative of Data Sovereignty

Cloud computing represents a paradigm shift in how computational resources, such as processing power, storage, networking, and software, are provisioned, managed, and consumed. At its core, cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. This definition, formalized by the National Institute of Standards and Technology (NIST), encapsulates the essential characteristics, service models, and deployment models that define the cloud ecosystem.

The foundational principle of cloud computing is the abstraction and virtualization of physical infrastructure. The model reconfigures the relationship between end users and digital resources by relocating data storage, application logic, and computational power from local, user-owned hardware to remote, internet-accessible infrastructure managed by specialized providers. Digital assets, from documents and databases to complex enterprise applications, are therefore no longer tethered to a specific physical device or location; they remain persistently available to authorized users anywhere with a stable internet connection.

This shift delivers significant operational, strategic, and economic advantages for individual professionals and organizations alike. By offloading data and processing tasks to geographically distributed data centers, cloud platforms provide high availability, scalability, and resilience while removing traditional local-hardware constraints such as finite storage capacity, processing bottlenecks, and device failure.
For instance, a marketing professional working on a campaign can access the latest version of a presentation stored in Google Drive from a laptop at the office, a tablet during a client meeting, or a smartphone while traveling—all without manually transferring files or worrying about version control.

This seamless continuity enhances productivity and supports modern, mobile-first workstyles. In recent years, cloud computing has transitioned from a niche academic paradigm to a backbone of digital infrastructure, enabling on-demand, scalable, and elastic services. A variety of survey and review articles document the maturation of cloud architectures, their challenges, and research directions (e.g., Omurgonulsen et al., “Cloud Computing: A Systematic Literature Review and Future Agenda” [18]; “A Comprehensive Survey on Cloud Computing” [0]). These works highlight how the cloud has become central to innovation, digital transformation, and new business models, even as privacy, data control, and governance emerge as critical obstacles. In parallel, use cases in domains such as industrial IoT, e-health, and scientific computing increasingly rely on cloud infrastructures or hybrid models. Dritsas et al. (2025) provide a recent survey of industrial IoT contexts, emphasizing the interplay of latency, scalable storage, edge/cloud synergy, and data security [4]. In eHealth, Hu & Bai (2014) systematically reviewed privacy, hybrid cloud frameworks, and security control mechanisms for healthcare data in the cloud [21].

Recent literature increasingly emphasizes the importance of data sovereignty in these contexts. Von Scherenberg et al. (2024) analyze the implications of data sovereignty for information systems design, highlighting challenges in governance, control, and compliance [28]. Abbas et al. (2024) conceptualize data sovereignty from a social contract perspective, suggesting that effective control over data must consider stakeholder agreements and societal expectations [29]. In the health sector, Cordes et al. (2024) explore tensions between digital health initiatives and indigenous data sovereignty, revealing competing interests and ethical considerations in sensitive data management [30]. Belli et al. (2024) investigate data sovereignty and cross-border data transfers as central elements of digital transformation in BRICS countries, highlighting regulatory, technical, and organizational implications [31]. Irekponor (2025) addresses the design of resilient AI architectures for predictive energy finance systems, factoring in data sovereignty, adversarial threats, and policy volatility [32]. Tewari & Chitnis (2024) focus on AI and multi-cloud compliance, presenting strategies to safeguard data sovereignty in complex distributed infrastructures [33].

1.1 Scope of this Review: ownership and sovereignty of your data

Motivated by these trends and tensions, SD Companies now introduces SHOCLOUD, a private, highly secure cloud solution tailored for personal use. SHOCLOUD enables individuals to safely store and manage their digital content: photos, professional documents, the many passwords and PINs required for various services, and the daily notes used for everyday tasks such as shopping lists, website references, sketches, or reminders. This turnkey personal cloud is designed to combine privacy, data sovereignty, and usability in a single platform. The project has just launched its fundraising campaign on Kickstarter; to support it, visit https://www.kickstarter.com/projects/sdcompanies/shocloud.

Figure: SHOCLOUD architecture, illustrating how SHOCLOUD works.

1.2 Structure of this Review: All around SHOCLOUD

We begin by defining the cloud concept and summarizing its enabling technologies. Then we compare prominent architectural and algorithmic approaches, discussing pros and cons with reference to scientific literature. After reviewing typical scientific and industrial use cases of cloud systems, we introduce the engineering details and innovations of SHOCLOUD, and conclude with a positioning of SD Companies as a partner for advanced system development.

2. Foundations of Cloud: Definitions, Architectures, and Enabling Technologies

2.1 What Is “The Cloud”?

At its core, “cloud computing” is often defined as a paradigm of parallel and distributed computing composed of a network of interconnected virtualized resources, dynamically provisioned and presented as a unified computing platform under Service Level Agreements (SLAs), a definition by Buyya et al. quoted in many surveys [0].
A systematic review (Omurgonulsen et al.) classifies cloud literature and proposes future research agendas across resource management, service models, and governance [18]. Recent studies provide further insights into cloud storage management and security. Khan et al. [1] present a taxonomy of cloud storage costs, analyzing factors that impact pricing and efficiency in both public and private cloud systems. Mahida [3] surveys secure data outsourcing techniques, including cryptographic and redundancy-based methods, to ensure data integrity and confidentiality. Umar et al. [6] propose a chaos-based image encryption scheme to protect sensitive multimedia content in cloud storage, demonstrating robust performance against common attack vectors. Gudimetla [7] provides a comprehensive overview of encryption strategies specifically designed for cloud storage environments, emphasizing practical implementations and trade-offs in computational overhead and security guarantees.

The standard taxonomy distinguishes service models (IaaS, PaaS, SaaS) and deployment models (public, private, hybrid, community). IaaS offers raw compute, storage, and network resources; PaaS offers middleware and runtime environments; SaaS offers full applications to end users. Deployment choices trade control against economies of scale, and in practice many organizations adopt hybrid or multi-cloud strategies. Omurgonulsen et al. discuss research gaps in hybrid, federated, and cross-cloud interoperability [18].

2.2 Enabling Technologies: Virtualization, Containers, Orchestration, Storage, Networking

Modern cloud computing relies on a stack of interdependent technologies—each playing a crucial role in scalability, flexibility, and performance. Virtualization abstracts physical resources to create isolated environments, while containers offer lightweight, portable instances ideal for microservices deployment. Orchestration tools coordinate these containers, optimizing resource allocation and ensuring reliability across distributed infrastructures. Recent studies emphasize secure and adaptive orchestration frameworks for real-time and industrial contexts [8][9][10][14]. Network virtualization extends these principles to communication layers, enabling programmable and secure virtual networks [13][15], supporting architectures such as SDN (Software-Defined Networking), NFV (Network Function Virtualization), and network slicing in 5G systems.

Several layers of enabling technologies underlie cloud systems:

  • Virtualization / Hypervisors: Early clouds relied on full VM virtualization (e.g., Xen, KVM). Virtual machines allow resource partitioning, isolation, and multiplexing of physical servers.
  • Containers and Container Orchestration: With the advent of Docker, Kubernetes, and related systems, lightweight isolation and orchestration became mainstream. Containers reduce overhead compared to full VMs and enable microservices architectures.
  • Software-Defined Networking (SDN) and Network Virtualization: These technologies decouple physical networks from logical overlays, enabling flexible network topologies, traffic management, and isolation.
  • Distributed Storage and Object Storage: Systems such as Ceph, Swift, and S3-compatible object stores provide scalable, fault-tolerant data repositories. Block storage (e.g., via distributed file systems) and distributed databases complement these architectures.
  • Orchestration and Scheduling Algorithms: Cloud systems employ advanced scheduling, load balancing, autoscaling, resource allocation, and placement algorithms to manage heterogeneous workloads and dynamic operational demands.
  • Edge and Fog Paradigms: To mitigate latency and locality constraints, fog or edge computing pushes computation and storage closer to end devices (Mouradian et al., “A Comprehensive Survey on Fog Computing” [22]).

For instance, Mouradian et al. survey fog computing and point out that pure centralized cloud can be unsuitable for latency-sensitive or location-constrained tasks; thus fog acts as a complementary layer [22].

3. Comparative Analysis: Approaches, Trade-offs, and Challenges

3.1 Centralized Cloud vs. Edge / Fog vs. Hybrid Models

Centralized cloud computing denotes an architecture in which computational resources, data storage, and application services are consolidated within large-scale, remotely located data centers, accessible over the internet. Edge computing, by contrast, processes data directly on or near end-user devices (e.g., sensors, smartphones), minimizing latency. Fog computing extends this paradigm by distributing intermediate compute nodes—such as routers or gateways—between the edge and the cloud, forming a hierarchical continuum. Hybrid systems strategically combine these approaches: leveraging the cloud’s elasticity and storage capacity while utilizing edge and fog layers for real-time, low-latency processing. This integration optimizes performance in distributed applications requiring both local responsiveness and centralized coordination.
Centralized public cloud brings strong economies of scale and ease of management, but may suffer latency, bandwidth, and data sovereignty issues. Fog/edge complements by placing compute near users, reducing latency and traffic to the core cloud. But fog nodes are resource-constrained and management is more complex. Hybrid models attempt to balance: latency-critical tasks handled near edge, bulk analytics in central cloud.
Dritsas et al. (2025) analyze IIoT systems and argue that hybrid cloud architectures combining centralized, distributed, and edge components are needed to meet real-time constraints while preserving scalability [4]. However, hybrid systems complicate orchestration, data consistency, and security.

3.2 Resource Allocation, Scheduling, and Load Balancing Algorithms

Algorithmic techniques for resource scheduling, load balancing, and autoscaling are central to cloud performance. Many proposals in the literature extend classical scheduling techniques (e.g., genetic algorithms, ant colony optimization, reinforcement learning, K-tree-like partitioning) to dynamic cloud environments. For example, some works adapt K-tree indexing or partitioning to resource clusters, analogous to the extended k-tree in graph partitioning [16].
The trade-off is between optimal allocation, responsiveness, and computational overhead. Some advanced approaches integrate machine learning for predictive scaling, but these often risk overfitting or poor generalization under variable workloads.
Recent research emphasizes adaptive and bio-inspired optimization techniques to address these challenges. For instance, energy-efficient load balancing based on rock hyrax optimization demonstrates improved convergence speed and resource utilization under dynamic cloud conditions ([17]). Comprehensive reviews of task scheduling and balancing strategies highlight that hybrid approaches combining heuristic and AI-based methods yield better performance in heterogeneous cloud environments ([19]). Moreover, SDN-based architectures introduce programmability and real-time flow management, enabling more efficient task migration and reduced latency across virtualized infrastructures ([20]). These innovations collectively aim to achieve a more sustainable, scalable, and intelligent cloud resource management framework.
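As a concrete illustration of the allocation trade-off, a greedy least-loaded placement heuristic can be sketched in a few lines. The Node model, capacities, and task costs below are hypothetical, for illustration only, and not an algorithm from the cited works:

```python
import heapq

class Node:
    """A compute node with a finite capacity (hypothetical model)."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.load = 0.0

def place_tasks(tasks, nodes):
    """Greedy least-relative-load placement: each task goes to the node
    whose load/capacity ratio is currently lowest."""
    heap = [(n.load / n.capacity, i, n) for i, n in enumerate(nodes)]
    heapq.heapify(heap)
    assignment = {}
    for task, cost in tasks:
        _, i, node = heapq.heappop(heap)   # node with the lowest ratio
        node.load += cost
        assignment[task] = node.name
        heapq.heappush(heap, (node.load / node.capacity, i, node))
    return assignment

nodes = [Node("edge-1", 4.0), Node("edge-2", 8.0)]
tasks = [("t1", 2.0), ("t2", 2.0), ("t3", 2.0)]
print(place_tasks(tasks, nodes))  # {'t1': 'edge-1', 't2': 'edge-2', 't3': 'edge-2'}
```

This greedy heuristic is cheap and responsive but not optimal; the ML-based predictive approaches mentioned above aim to anticipate load instead of reacting to it.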

3.3 Security, Privacy, and Data Protection in the Cloud

One of the most discussed challenges is enforcing data security, integrity, confidentiality, and privacy in the cloud. Parast et al. provide a taxonomy of cloud security threats (multi-tenancy risk, side channels, malicious insiders, data leakage) and mitigation techniques over the past decade [2]. A complementary survey by Hassan et al. (2022) surveys data protection techniques such as encryption, access control, data anonymization, and secure multi-party computation in cloud settings [12].
One particular challenge is the cryptographic enforcement of data policy (e.g., encrypted storage with search, attribute-based encryption, functional encryption); these methods add computational and management complexity. Another is key management and trust: who holds the keys, how to rotate them, and how to prevent unauthorized access, including in multi-cloud or federated designs.
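To make the key-management point concrete, one widely used pattern is to derive an independent sub-key per purpose from a single master key via HKDF (RFC 5869), so that one module's key can be rotated or revoked without touching the others. The sketch below uses only the standard library; the module labels are illustrative, not any particular system's actual scheme:

```python
import hashlib
import hmac

def hkdf_sha256(master_key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869) over SHA-256: derive an independent sub-key per
    purpose from one master key."""
    prk = hmac.new(salt, master_key, hashlib.sha256).digest()   # extract step
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                                    # expand step
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

master = b"\x00" * 32  # in practice: held in a TPM, secure element, or user vault
notes_key = hkdf_sha256(master, salt=b"example-v1", info=b"module:notes")
files_key = hkdf_sha256(master, salt=b"example-v1", info=b"module:files")
assert notes_key != files_key  # distinct info labels yield independent keys
```

Distinct `info` labels partition the key space, which keeps trust boundaries narrow: compromising one module's key reveals nothing about the others.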

3.4 Cloud Migration and Interoperability Challenges

For organizations migrating legacy systems to the cloud or across clouds, Fahmideh et al. propose a survey and evaluation framework of migration strategies and open challenges (e.g. rearchitecting, dependency resolution, data migration, vendor lock-in, latency, compliance) [24]. One chronic risk is vendor lock-in: once applications are heavily tied to a cloud’s proprietary APIs or services, shifting becomes costly.
Omurgonulsen et al. also emphasize cross-cloud interoperability, portability, and federated governance as underexplored research challenges [18].

3.5 Summary of Comparative Advantages & Drawbacks

This is how the different approaches compare before the launch of SHOCLOUD:

Approach                    | Strengths                                        | Weaknesses / Challenges
Centralized Cloud (public)  | Scale, ease, cost amortization                   | Latency, sovereignty loss, potential vendor lock-in, network dependency
Private / On-premises Cloud | Full control, sovereignty, predictable latency   | Higher capital cost, more maintenance burden, limited scale
Edge / Fog                  | Low latency, locality, real-time responsiveness  | Constrained resources, fragmented management, consistency issues
Hybrid / Multi-Cloud        | Best-of-all trade-offs                           | Orchestration complexity, consistency, security, data movement overhead

Many scientific surveys and reviews corroborate that no one-size-fits-all solution exists; optimal cloud design depends on application context, trust boundaries, performance needs, and regulatory constraints (e.g. Omurgonulsen et al. [18], Dritsas et al. [4]).

4. Scientific and Industrial Applications of Cloud Technologies

4.1 e-Science and Scientific Workflows in the Cloud

The scientific community increasingly runs large workflows, data pipelines, and simulation tasks on cloud infrastructure. Zhou et al. (2014) present a taxonomy of “eScience-as-a-Service”, reviewing life sciences, physics, climate, and social science workloads, and highlighting challenges in data staging, performance unpredictability, security, and cost control [23].
Platforms like Pegasus (Deelman et al.) are widely used to manage large-scale scientific workflows over distributed cloud or hybrid resources [27].
Other experimental studies demonstrate scaling genomic pipelines, climate modeling, and astronomical data analysis on public clouds, hybrid clouds, or federated systems. The advantages include on-demand elasticity, pay-as-you-go financing, and flexibility to burst into more capacity.

Figure: SHOCLOUD architecture, showing electronics used in tests of the SHOCLOUD conceptual system.

4.2 Cloud in Industrial IoT, Smart Manufacturing, and Innovation

In industrial IoT, cloud provides scalable ingestion, analytics, and control loops. The Dritsas et al. survey explores how cloud + edge architectures are leveraged in IIoT for predictive maintenance, anomaly detection, energy optimization, and supply chain visibility [4].
Many industrial players integrate cloud-based analytics, digital twins, and remote monitoring. Vendors such as Siemens, GE Digital, AWS IoT, Microsoft Azure IoT, and Huawei Cloud provide end-to-end platforms combining device connectivity, cloud storage, and AI.
On the product innovation side, cloud enables remote firmware updates, over-the-air provisioning, scalable backend services, and integration with SaaS ecosystems.

4.3 Cloud in Consumer & Productivity Apps, Self-Hosting Trends

In consumer and enterprise contexts, cloud apps such as file sync (Dropbox, Google Drive), collaboration (Office 365, Google Workspace), password managers (Vault services), and note-taking sync are ubiquitous. The rise of self-hosting is a reaction to privacy, cost, and control concerns. Community-driven guides (e.g. Self-Hosting Guide on GitHub) catalog dozens of typical home / small-scale cloud services to self-host (Nextcloud, Vaultwarden, Immich, etc.) [3].
Real-world experiments show Raspberry Pi hosting of Nextcloud or personal cloud services as a low-cost alternative to commercial clouds (e.g. “Assessing the viability and dependability of Nextcloud deployed on Raspberry Pi” [5]).
The trade-offs, especially performance, availability, and remote connectivity, are well discussed in self-hosting communities and hacker forums (e.g., discussions on HN) [11].

The growing relevance of self-hosted and on-premise cloud solutions reflects a strategic shift in how organizations and individuals perceive control, privacy, and the long-term sustainability of digital assets. While public cloud services provide scalability and simplicity, numerous studies now emphasize the renewed importance of locally managed infrastructures as a foundation for data sovereignty and resilience. Ali et al. (2024) provide a systematic comparison between on-premise and cloud services, revealing that the former offers superior control over latency, data lifecycle management, and regulatory compliance, particularly for sectors dealing with confidential or mission-critical information [25]. Similarly, Mušić et al. (2024) demonstrate that lightweight on-premise PaaS models can accelerate digital transformation while reducing dependency on large-scale cloud providers, promoting cost efficiency and environmental sustainability [26]. Furthermore, Aslanli (2024) highlights the security advantages of on-premise architectures in industrial IoT environments, where combining local computation with selective cloud interaction strengthens system integrity and mitigates cyber threats [27].

In line with these findings, SD Companies has chosen to design SHOCLOUD as a self-hosted, privacy-centric platform, empowering users to maintain full ownership of their data. This approach ensures that personal documents, multimedia archives, and sensitive credentials remain accessible yet secure, without dependency on third-party infrastructures. The resulting architecture merges user autonomy with professional-grade reliability, demonstrating how next-generation personal clouds can align convenience, ethics, and digital sovereignty within one coherent solution.

5. The SHOCLOUD Innovation: Technical Architecture and Engineering Design

5.1 Motivation and Positioning

The SHOCLOUD project is motivated by the need to combine the ease and scalability of cloud services with strong guarantees of data sovereignty, modularity, portability, and privacy. Many organizations and individuals are reluctant to entrust critical data or applications to large cloud providers due to lock-in risk, opaque governance, and potential jurisdictional exposure. SHOCLOUD aims to deliver a turnkey, self-contained, federatable cloud node that can integrate with centralized or federated cloud networks while preserving sovereignty.

Figure: example of the electronic parts behind the SHOCLOUD system.

In essence, SHOCLOUD is designed as a modular, federated cloud appliance with built-in support for Nextcloud, Joplin, Immich, and Vaultwarden functions (file sync, notes, media/photo sync, password vault) — providing the convenience of modern SaaS but under full control of the user or organization.

5.2 Overall System Architecture

The SHOCLOUD node is architected as a layered, containerized, orchestrated system. Its major subsystems include:

  1. Infrastructure Layer: Underlying hardware (compute, storage, networking). It may be a small server, edge device, or even a rack. It includes local storage (e.g. NVMe, SSD, RAID arrays) and backup link to remote nodes or object stores.
  2. Container / Orchestration Layer: SHOCLOUD leverages container technologies (e.g. Docker or Podman) and orchestration (e.g. Kubernetes / K3s or similar lightweight orchestrator). This enables workload isolation, lifecycle control, scaling, and updates.
  3. Service Layer: Within containers or service pods, SHOCLOUD hosts multiple services:
    • Nextcloud (file sync, collaboration, web UI)
    • Joplin (note sync server)
    • Immich (photo/media backup & gallery)
    • Vaultwarden (self-hosted password vault)
    • Auxiliary services (e.g. reverse proxy / ingress, certificate manager, backup agent, monitoring)
  4. Federation & Interconnect Layer: SHOCLOUD supports federation protocols (e.g. ActivityPub, WebDAV federation, or custom APIs) and can optionally connect to a central aggregator or peers. Data exchange, replication, or federation is encrypted and controlled by policy.
  5. Control & Management Layer: A management plane handles orchestration, updates, cryptographic key management, local policy, and possibly centralized provisioning (for managed nodes).
  6. Security & Cryptography Layer: All sensitive data (files, notes, vault contents) are encrypted at rest using robust algorithms (e.g. AES-GCM, age, or other modern authenticated encryption). Key storage may be in secure enclave, hardware security module, or user-managed key vault. The system may support end-to-end encryption for selected modules (e.g. Vaultwarden, Joplin). Access control, user authentication, and audit logging are integral.

The architecture supports a mix of local-first use and optionally federated or hybrid deployment. Users can start with a standalone SHOCLOUD node (private cloud) and later federate across multiple nodes for redundancy or sharing.
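As an illustration of the container and service layers described above, such a node could be declared in a compose-style manifest. The image names, volumes, and ports below are assumptions made for the sketch, not the actual SHOCLOUD manifest:

```yaml
# Illustrative compose-style sketch of a SHOCLOUD-like service layer
# (images, volumes, and ports are assumptions, not the shipped manifest).
services:
  nextcloud:
    image: nextcloud:stable
    volumes: ["/srv/cloud/files:/var/www/html/data"]
  joplin:
    image: joplin/server:latest
  immich:
    image: ghcr.io/immich-app/immich-server:release
  vaultwarden:
    image: vaultwarden/server:latest
  proxy:
    image: caddy:latest   # reverse proxy / TLS termination for all services
    ports: ["443:443"]
```

In this layout, only the reverse proxy is exposed; the four application services sit behind it, matching the ingress and certificate-manager roles listed above.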

5.3 Data Flow, Storage, and Synchronization Algorithms

Internally, SHOCLOUD uses differential synchronization, content-addressable storage (CAS), deduplication, and caching to minimize bandwidth and storage usage.

  • File sync engine (Nextcloud): The Nextcloud backend uses a file change journal, delta sync, and chunking. It may leverage technologies like rsync-like diffs, block-level deduplication, delta encoding and compression, and checksums to detect unchanged segments.
  • Media sync (Immich): Immich specializes in image and video backup and gallery, possibly using optimized upload via thumbnails, versioning, and deduplication (e.g. by perceptual hashing).
  • Note sync (Joplin): Joplin may use encrypted notebooks and sync via WebDAV or custom sync protocol; SHOCLOUD ensures secure channels and merge conflict resolution via CRDT or operational transformation (OT) techniques.
  • Vault data (Vaultwarden): Password vaults are stored as encrypted JSON following the Bitwarden protocol; Vaultwarden synchronizes them via encrypted API endpoints.
  • Federation / replication: Between SHOCLOUD nodes, replication uses append-only logs, Merkle-tree summarization, or CRDT-based merging to ensure consistency and detect divergence. Sync can be batched, incremental, and optionally selective by namespace.
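The content-addressable, deduplicated storage mentioned above can be sketched minimally with fixed-size chunks keyed by their SHA-256 digest (production systems typically use larger, content-defined chunks). The ChunkStore class below is illustrative, not SHOCLOUD's implementation:

```python
import hashlib

class ChunkStore:
    """Content-addressable store: chunks are keyed by their SHA-256 digest,
    so identical chunks across files are stored only once (deduplication)."""
    def __init__(self, chunk_size=4):
        self.chunk_size = chunk_size
        self.chunks = {}              # digest -> chunk bytes

    def put(self, data: bytes):
        """Split data into fixed-size chunks, store only new ones, and
        return the list of digests (the file's 'recipe')."""
        recipe = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(digest, chunk)  # dedup: skip known chunks
            recipe.append(digest)
        return recipe

    def get(self, recipe):
        """Reassemble a file from its recipe of digests."""
        return b"".join(self.chunks[d] for d in recipe)

store = ChunkStore()
r1 = store.put(b"AAAABBBBCCCC")
r2 = store.put(b"AAAABBBBDDDD")   # shares two chunks with the first file
assert store.get(r1) == b"AAAABBBBCCCC"
print(len(store.chunks))          # 4 unique chunks stored, not 6
```

Because recipes are just digest lists, replicating a file between nodes only requires transferring the chunks the peer does not already hold, which is the bandwidth saving the sync engine relies on.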

Scheduling of background sync and maintenance tasks is governed by an internal scheduler based on priority, load, time windows, and resource availability. For example, large backups may run during off-peak intervals, whereas interactive sync is prioritized.
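A minimal sketch of such a priority- and time-window-aware scheduler follows; the two priority classes and the hypothetical off-peak window of 01:00-05:00 are assumptions for illustration:

```python
import heapq

class MaintenanceScheduler:
    """Minimal priority queue for background jobs: interactive sync outranks
    bulk backups, and bulk jobs only run inside an off-peak window."""
    INTERACTIVE, BULK = 0, 1          # lower value = higher priority

    def __init__(self):
        self._queue = []
        self._seq = 0                 # tie-breaker preserves FIFO order

    def submit(self, name, priority):
        heapq.heappush(self._queue, (priority, self._seq, name))
        self._seq += 1

    def next_job(self, hour):
        """Return the next runnable job, deferring BULK jobs outside 01:00-05:00."""
        deferred, job = [], None
        while self._queue:
            prio, seq, name = heapq.heappop(self._queue)
            if prio == self.BULK and not (1 <= hour < 5):
                deferred.append((prio, seq, name))   # keep for off-peak hours
                continue
            job = name
            break
        for item in deferred:
            heapq.heappush(self._queue, item)
        return job

s = MaintenanceScheduler()
s.submit("full-backup", MaintenanceScheduler.BULK)
s.submit("photo-sync", MaintenanceScheduler.INTERACTIVE)
print(s.next_job(hour=14))   # prints "photo-sync"; the backup waits for off-peak
```

A production scheduler would also weigh load and resource availability, as the paragraph above notes, but the priority and time-window mechanics are the same in spirit.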

Figure: electronic parts behind the SHOCLOUD system.

5.4 Security, Key Management, and Sovereignty Controls

The security architecture is critical:

  • Encryption at rest: Data is encrypted in each module using strong symmetric keys (e.g. AES-256-GCM).
  • Key management: Keys are derived or stored in a secure vault (e.g. hardware TPM, secure element, or encrypted on-disk key vault).
  • Access control / authentication: Users authenticate via robust mechanisms (password + 2FA, OAuth, WebAuthn) enforced per module. Access policies may include attribute-based access control (ABAC).
  • End-to-end encryption support: For some modules (e.g. Joplin, Vaultwarden), SHOCLOUD can provide E2E encryption where only the client knows the decryption key. The server only handles encrypted blobs.
  • Audit logging and tamper-evidence: All operations are logged with cryptographic integrity (e.g. merkle logs, append-only logs).
  • Federated trust boundaries: When replicating or federating between nodes, each node holds its own keys; cross-node access is mediated by inter-node trust policies and encryption. No central entity sees plaintext data unless explicitly configured.
  • Update integrity and secure boot: The device may support verified boot, firmware attestation, and signed updates to resist supply-chain attacks.
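The append-only, tamper-evident logging idea can be illustrated with a simple hash chain, where each entry commits to its predecessor's hash so any retroactive edit breaks verification. This is a minimal sketch, not SHOCLOUD's audit subsystem:

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry commits to the previous entry's
    hash, so any retroactive edit breaks the chain (tamper-evidence)."""
    def __init__(self):
        self.entries = []
        self.head = "0" * 64          # genesis hash

    def append(self, event: dict):
        record = json.dumps({"prev": self.head, "event": event}, sort_keys=True)
        self.head = hashlib.sha256(record.encode()).hexdigest()
        self.entries.append((record, self.head))

    def verify(self) -> bool:
        """Re-walk the chain; any modified record or broken link fails."""
        prev = "0" * 64
        for record, digest in self.entries:
            if json.loads(record)["prev"] != prev:
                return False
            if hashlib.sha256(record.encode()).hexdigest() != digest:
                return False
            prev = digest
        return True

log = AuditLog()
log.append({"user": "alice", "action": "login"})
log.append({"user": "alice", "action": "share", "file": "notes.md"})
assert log.verify()
```

Publishing only the current head (e.g., to a peer node) is enough to later prove that earlier entries were not rewritten, which is how Merkle-style logs anchor trust across federation boundaries.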

5.5 Operational & Engineering Considerations

  • Resilience & redundancy: SHOCLOUD supports RAID, snapshotting, local backup to remote nodes or off-site storage, and failover.
  • Performance tuning: Caching, SSD tiering, memory buffers, and prioritization of interactive traffic over background batch jobs.
  • Resource scaling: SHOCLOUD may allow scaling of individual components (e.g. multiple Nextcloud worker pods) or offloading heavy tasks to a “cloud node.”
  • User-friendly orchestration and UI: The control plane includes a web UI or CLI for deployment, updates, monitoring, and federation setup.
  • Monitoring, telemetry, and alerting: Metrics (CPU, I/O, network, sync queues) are monitored, with alerts and dashboards.
  • Upgrade path & modular extensibility: Modules may be upgraded independently; new services can be added via container modules under governance.

In sum, SHOCLOUD is envisioned as a sovereign cloud appliance combining modular services, federation, robust security, and user-friendly management.

6. Functional Capabilities Brought by Nextcloud, Joplin, Immich, Vaultwarden

Below, we elaborate the principal functionalities that SHOCLOUD integrates by virtue of including these four modules:

6.1 Nextcloud (File Sync, Collaboration, Sharing)

Nextcloud is a mature open-source platform offering file synchronization, sharing, web UI, collaboration (document editing, calendars, contacts), group management, and plugin extensibility. In SHOCLOUD, Nextcloud functions as the core file system and collaboration backbone.
Typical Nextcloud features integrated include:

  • Multi-device sync (desktop, mobile) via clients
  • File versioning, trash, revision control
  • WebDAV, Collabora/ONLYOFFICE integration for online document editing
  • Group sharing, link-based sharing with expiration and password
  • Plugins and apps (e.g. calendars, contacts, project management)
  • Access control, quotas, storage management
  • Federation with other Nextcloud or WebDAV servers
  • External storage support (object stores, S3 backend)

In the SHOCLOUD architecture, Nextcloud is containerized, managed by the orchestration layer, and tied into the unified control, backup, and encryption infrastructure.

6.2 Joplin (Note-taking & Sync)

Joplin is a note-taking application that supports end-to-end encryption, notebooks, tags, attachments, and synchronization across devices (via WebDAV, or a custom sync API). In SHOCLOUD, the Joplin server module enables:

  • Secure, encrypted note synchronization
  • Conflict resolution (via OT/CRDT)
  • Attachment storage under encryption
  • Web and mobile client compatibility
  • Tagging, full-text search, metadata indexing
  • Multi-user or shared notebooks (if desired)

The SHOCLOUD design ensures that only encrypted blobs reach the server; key derivation remains client-side if E2E is used.
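As a minimal illustration of CRDT-style conflict resolution, a last-writer-wins register merges deterministically regardless of the order in which replicas sync. Real note synchronization is richer than this, and the sketch below is not Joplin's actual protocol:

```python
class LWWRegister:
    """Last-writer-wins register: a simple CRDT whose merge is commutative,
    associative, and idempotent, so replicas converge in any sync order."""
    def __init__(self, value, timestamp, replica_id):
        self.value, self.timestamp, self.replica_id = value, timestamp, replica_id

    def merge(self, other):
        # Higher timestamp wins; replica id breaks exact ties deterministically.
        if (other.timestamp, other.replica_id) > (self.timestamp, self.replica_id):
            return LWWRegister(other.value, other.timestamp, other.replica_id)
        return LWWRegister(self.value, self.timestamp, self.replica_id)

# Two devices edit the same note title while offline:
laptop = LWWRegister("Groceries", timestamp=10, replica_id="laptop")
phone = LWWRegister("Groceries + hardware store", timestamp=12, replica_id="phone")
merged = laptop.merge(phone)
print(merged.value)   # prints "Groceries + hardware store": the later edit wins
```

Because merge order does not matter, no central coordinator is needed to reconcile replicas, which is precisely why CRDTs suit intermittently connected, federated nodes.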

6.3 Immich (Media / Photo Backup & Gallery)

Immich is designed for efficient backup of photos and videos from mobile devices, with gallery presentation, deduplication, and optional sharing. In SHOCLOUD, the Immich service offers:

  • Automatic upload/sync of media from client apps
  • Thumbnail generation, previews, album views, metadata extraction
  • Duplicate detection using perceptual hashing / fingerprinting
  • Versioning and incremental backup
  • Web and mobile presentation UI
  • Privacy-aware sharing and access control
  • Local storage optimizations (e.g. tiered storage, caching)

Integration with the unified storage, backup, encryption and orchestration layers ensures media is managed securely and efficiently.
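The perceptual-hashing idea behind duplicate detection can be sketched with an "average hash": each bit records whether a pixel is brighter than the image mean, and near-duplicates land at a small Hamming distance. The code below assumes an already-downscaled grayscale matrix and is illustrative, not Immich's implementation:

```python
def average_hash(pixels):
    """Perceptual 'average hash' of a grayscale image, given as a 2D list
    of 0-255 values already downscaled (e.g., to 8x8): each bit records
    whether a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum((1 << i) for i, p in enumerate(flat) if p > mean)

def hamming(a: int, b: int) -> int:
    """Number of differing bits; small distances suggest near-duplicates."""
    return bin(a ^ b).count("1")

img = [[200, 200], [10, 10]]          # bright top, dark bottom
near_dup = [[190, 205], [12, 8]]      # same structure, slight noise
other = [[10, 200], [200, 10]]        # different structure
assert hamming(average_hash(img), average_hash(near_dup)) == 0
assert hamming(average_hash(img), average_hash(other)) > 0
```

Unlike the cryptographic hashes used for exact deduplication, a perceptual hash is deliberately insensitive to recompression and minor noise, so re-uploaded copies of the same photo can be flagged even when their bytes differ.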

6.4 Vaultwarden (Self-Hosted Password Vault)

Vaultwarden is a lightweight, self-hosted implementation of the Bitwarden-compatible server, allowing users to store and sync passwords, secrets, and notes securely. In SHOCLOUD, Vaultwarden provides:

  • Encrypted vault storage compliant with Bitwarden protocol
  • Multi-device sync and web client compatibility
  • Organization-level sharing (teams, collections)
  • 2FA, user management, access control
  • API endpoints, browser plugins, mobile clients
  • Audit logging, versioning, and recovery options

Because Vaultwarden is already end-to-end encrypted by design, SHOCLOUD wraps it within its orchestration, backup, and federation layers without exposing cleartext.

Collectively, these four modules yield a robust personal or organizational “cloud stack” for files, notes, media, and secrets — all operating under a controlled, sovereign fabric.

7. Advantages, Limitations, and Risk Mitigations of SHOCLOUD

7.1 Key Advantages

  • Sovereignty and control: The data is stored under user/organization control; no opaque third-party provider controls keys or access.
  • Federation and modularity: SHOCLOUD can interoperate with other SHOCLOUD nodes or federated systems, enabling controlled sharing or failover.
  • Integrated turnkey stack: Users get a pre-integrated stack (Nextcloud, Joplin, Immich, Vaultwarden) instead of stitching components manually.
  • Security-centric design: Encryption, key management, audit trails, secure updates built-in.
  • Scalable and extensible: Modular services can scale independently, and new modules can be added over time.
  • Offline/local-first support: Even if the network is interrupted, the local node continues to operate; sync resumes later.
  • Hybrid deployment flexibility: A SHOCLOUD node can run entirely standalone, or connect to a central “cloud aggregator” network for burst capacity, backup, or federation.

7.2 Potential Limitations and Mitigations

  • Hardware constraints: Performance depends on the underlying hardware; for I/O-heavy workloads, SSDs, NVMe, or caching tiers are needed.
    Mitigation: Tiered storage, caching, or offloading heavy tasks to a remote cloud node.
  • Network bottlenecks: For remote access or federation, bandwidth and latency can limit throughput.
    Mitigation: Differential sync, compression, deduplication, scheduling, and throttling.
  • Complexity of management: Operating a node with multiple modules, updates, backups, and federation adds complexity.
    Mitigation: Provide a robust UI/CLI, automation, managed provisioning, and auto-updates.
  • Consistency and conflict resolution: In federated or intermittent connectivity environments, data conflicts may arise.
    Mitigation: Use CRDTs or robust merge protocols, conflict resolution workflows, and versioning.
  • Security risks: While encryption helps, vulnerabilities (e.g. side channels, software bugs) remain.
    Mitigation: Regular security audits, hardened containers, minimal attack surface, isolation, and secure boot.
  • Scaling limits: Under very heavy enterprise loads, federated nodes must coordinate to scale.
    Mitigation: Allow offload to centralized “cloud aggregator” or auxiliary nodes for intense workloads.
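Several of the mitigations above (differential sync, deduplication) commonly rest on content-defined chunking: data is split at positions determined by the content itself, so a small edit invalidates only nearby chunks. A minimal sketch, with illustrative window and mask parameters rather than any specific tool's values:

```python
import hashlib

def is_boundary(window_bytes: bytes, mask: int = 0x0F) -> bool:
    """A position is a chunk boundary when a hash of the trailing window
    matches a bit pattern; boundaries therefore depend on content alone."""
    h = int.from_bytes(hashlib.sha256(window_bytes).digest()[:4], "big")
    return (h & mask) == mask

def chunk(data: bytes, window: int = 16) -> list[bytes]:
    """Split data at content-defined boundaries (minimum chunk = window)."""
    out, start = [], 0
    for i in range(window, len(data) + 1):
        if i - start >= window and is_boundary(data[i - window:i]):
            out.append(data[start:i])
            start = i
    if start < len(data):
        out.append(data[start:])
    return out

def digests(chunks: list[bytes]) -> set[str]:
    return {hashlib.sha256(c).hexdigest() for c in chunks}

old = bytes(range(256)) * 8               # 2 KiB of sample data
new = old[:100] + b"EDIT" + old[100:]     # small local edit
shared = digests(chunk(old)) & digests(chunk(new))
# Boundaries re-synchronize shortly after the edit, so most chunk digests
# are shared and only the chunks around the edit need to be transferred.
assert len(shared) > 0
```

Under this scheme, differential sync only ships chunks whose digests the peer does not already hold, which is also what makes cross-file deduplication cheap.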

Overall, many of these trade-offs reflect the broader cloud/edge trade-offs discussed in academic literature (e.g. Omurgonulsen et al. [18], Parast et al. [2], Dritsas et al. [4]).
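The CRDT-based conflict mitigation mentioned in Section 7.2 can be illustrated with a minimal last-writer-wins register (class and field names are illustrative; production systems would typically use richer CRDTs or vector clocks):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LWWRegister:
    """Last-writer-wins register: a tiny state-based CRDT."""
    value: str
    timestamp: float
    node_id: str  # tie-breaker so merges are fully deterministic

    def merge(self, other: "LWWRegister") -> "LWWRegister":
        # Commutative, associative, and idempotent: replicas can exchange
        # states in any order, any number of times, and still converge.
        return max(self, other, key=lambda r: (r.timestamp, r.node_id))

# Two nodes edit the same note while disconnected:
a = LWWRegister("draft-v1", 100.0, "node-a")
b = LWWRegister("draft-v2", 101.5, "node-b")

assert a.merge(b) == b.merge(a) == b       # order-independent convergence
assert a.merge(b).merge(b) == b            # re-applying a merge is a no-op
```

Because the merge never blocks and never fails, intermittently connected SHOCLOUD nodes can reconcile state automatically, reserving manual conflict-resolution workflows for semantically ambiguous cases.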

8. Role of SD Companies as a Technology Partner and Ecosystem Enabler

SD Companies, through SHOCLOUD, positions itself not merely as a vendor but as an **engineering partner and trusted integrator** in the domain of sovereign, modular cloud infrastructure. The innovation lies not just in shipping a product but in enabling organizations to adopt next-generation cloud-native, privacy-aware architectures.

Key roles that SD Companies can assume:

  1. R&D and architectural leadership: Deep domain knowledge in cloud, orchestration, cryptography, and federation.
  2. Integration and customization: Adapting the SHOCLOUD base to vertical use cases (e.g. healthcare, manufacturing, research clusters).
  3. Certified modules and extensions: Developing validated plugins, connectors (e.g. to enterprise systems, edge devices, sensors).
  4. Managed services and support: Offering supervision, monitoring, upgrades, and incident support.
  5. Interoperability and federation governance: Enabling consortium-based federation, cross-node trust, and compliant exchange (e.g. under GDPR or local data regulation).
  6. Security audits & compliance: Validating the system against industry standards (e.g. ISO 27001, SOC, or domain-specific regulations).
  7. Ecosystem enablement: Partnering with hardware vendors, IoT integrators, app developers to build vertical solutions atop SHOCLOUD.

Because the SHOCLOUD architecture is modular and open, SD Companies can remain a continuous enabler of innovation — not just delivering a static product but evolving the platform over time, assisting adoption in novel domains, and ensuring that clients remain at the frontier of cloud-native, sovereign, privacy-aware infrastructure.

References

[0] Md. Imran Alam, Manjusha Pandey, Siddharth S. Rautaray. “A Comprehensive Survey on Cloud Computing.” (2015). https://doi.org/10.5815/ijitcs.2015.02.09

[1] Khan, Akif Quddus, et al. “Cloud storage cost: a taxonomy and survey.” World Wide Web 27.4 (2024): 36. https://doi.org/10.1007/s11280-024-01273-4

[2] F. K. Parast et al. “Cloud computing security: A survey of service-based models.” (2022). https://doi.org/10.1016/J.COSE.2021.102580

[3] Mahida, Ankur. “Secure data outsourcing techniques for cloud storage.” International Journal of Science and Research (IJSR) 13.4 (2024): 181-184. https://doi.org/10.21275/sr24402065432

[4] E. Dritsas et al. “A Survey on the Applications of Cloud Computing in IIoT.” MDPI (2025). https://doi.org/10.3390/bdcc9020044

[5] Girish N, Krupananda S, Jagadish P. Patel et al. “Assessing the Viability and Dependability of Nextcloud Deployed on Raspberry Pi.” (2024). (IJCRT). https://www.ijcrt.org/papers/IJCRT2405612.pdf

[6] Umar, Talha, Mohammad Nadeem, and Faisal Anwer. “Chaos based image encryption scheme to secure sensitive multimedia content in cloud storage.” Expert Systems with Applications 257 (2024): 125050. https://doi.org/10.1016/j.eswa.2024.125050

[7] Gudimetla, Sandeep Reddy. “Data encryption in cloud storage.” International Research Journal of Modernization in Engineering Technology and Science 6 (2024): 2582-5208. https://doi.org/10.56726/IRJMETS51187

[8] Mahavaishnavi, V., R. Saminathan, and R. Prithviraj. “Secure container orchestration: A framework for detecting and mitigating orchestrator-level vulnerabilities.” Multimedia Tools and Applications 84.17 (2025): 18351–18371. https://doi.org/10.1007/s11042-024-19613-x

[9] Struhár, Václav, et al. “Hierarchical resource orchestration framework for real-time containers.” ACM Transactions on Embedded Computing Systems 23.1 (2024): 1–24. https://doi.org/10.1145/3592856

[10] Alamoush, Ahmad, and Holger Eichelberger. “Open source container orchestration for Industry 4.0–requirements and systematic feature analysis.” International Journal on Software Tools for Technology Transfer 26.5 (2024): 527–550. https://doi.org/10.1007/s10009-024-00767-w

[11] “Ask HN: Self-hosting in 2023: Nextcloud on Linode, or…?” Hacker News (2023). https://news.ycombinator.com/item?id=34503176

[12] J. Hassan et al. “The Rise of Cloud Computing: Data Protection, Privacy.” (2022). https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9197654/

[13] Geetha, A., and Punam Kumari. “Network virtualization.” In Towards Wireless Heterogeneity in 6G Networks. CRC Press, 2024, pp. 226–242. https://doi.org/10.1201/9781003369028-12

[14] Barletta, Marco, et al. “Criticality-aware monitoring and orchestration for containerized Industry 4.0 environments.” ACM Transactions on Embedded Computing Systems 23.1 (2024): 1–28. https://doi.org/10.1145/3604567

[15] Alnaim, Abdulrahman K. “Securing 5G virtual networks: A critical analysis of SDN, NFV, and network slicing security.” International Journal of Information Security (2024): 1–21. https://doi.org/10.1007/s10207-024-00900-5

[16] Wang, Na, et al. “An Efficient and Secure Spatial Keyword Ciphertext Retrieval Scheme Based on Cloud-Fog Collaboration.” IEEE Transactions on Information Forensics and Security (2025). https://doi.org/10.1109/TIFS.2025.3618394

[17] Singhal, Saurabh, et al. “Energy efficient load balancing algorithm for cloud computing using rock hyrax optimization.” IEEE Access, vol. 12, 2024, pp. 48737–48749. https://doi.org/10.1109/ACCESS.2024.3380159

[18] M. Omurgonulsen et al. “Cloud Computing: A Systematic Literature Review and Future Agenda.” (2021). https://doi.org/10.4018/JGIM.20211101.oa40

[19] Devi, Nisha, et al. “A systematic literature review for load balancing and task scheduling techniques in cloud computing.” Artificial Intelligence Review, vol. 57, no. 10, 2024, p. 276. https://doi.org/10.1007/S10462-024-10925-W

[20] Mahdizadeh, Masoumeh, Ahmadreza Montazerolghaem, and Kamal Jamshidi. “Task scheduling and load balancing in SDN-based cloud computing: A review of relevant research.” Journal of Engineering Research, 2024. https://doi.org/10.1016/j.jer.2024.11.002

[21] Y. Hu, G. Bai. “A systematic literature review of cloud computing in eHealth.” (2014). https://arxiv.org/abs/1412.2494

[22] Carla Mouradian, D. Naboulsi, S. Yangui, et al. “A Comprehensive Survey on Fog Computing: State-of-the-art and Research Challenges.” arXiv (2017). https://arxiv.org/abs/1710.11001

[23] A. Chi Zhou, B. He, S. Ibrahim. “A Taxonomy and Survey on eScience as a Service in the Cloud.” (2014). https://arxiv.org/abs/1407.7360

[24] M. Fahmideh, G. Low, G. Beydoun, F. Daneshgar. “Cloud Migration Process: A Survey, Evaluation Framework and Open Challenges.” (2020). https://arxiv.org/abs/2004.10725

[25] Ali, Asif, et al. “Systematic analysis of on-premise and cloud services.” International Journal of Cloud Computing, vol. 13, no. 3, 2024, pp. 214–242. https://doi.org/10.1504/IJCC.2024.139604

[26] Mušić, Din, Jernej Hribar, and Carolina Fortuna. “Digital transformation with a lightweight on-premise PaaS.” Future Generation Computer Systems, vol. 160, 2024, pp. 619–629. https://doi.org/10.1016/j.future.2024.06.026

[27] Aslanli, Orkhan. “Cloud and on-premises based security solution for industrial IoT.” International Journal of Information Engineering and Electronic Business, vol. 16, no. 5, 2024, pp. 55–62.

[28] von Scherenberg, Franziska, Malte Hellmeier, and Boris Otto. “Data sovereignty in information systems.” Electronic Markets 34.1 (2024): 15. https://doi.org/10.1007/s12525-024-00693-4

[29] Abbas, Antragama Ewa, et al. “Beyond control over data: Conceptualizing data sovereignty from a social contract perspective.” Electronic Markets 34.1 (2024): 20. https://doi.org/10.1007/s12525-024-00695-2

[30] Cordes, Ashley, et al. “Competing interests: digital health and indigenous data sovereignty.” NPJ Digital Medicine 7.1 (2024): 178. https://doi.org/10.1038/s41746-024-01171-z

[31] Belli, Luca, Walter B. Gaspar, and Shilpa Singh Jaswant. “Data sovereignty and data transfers as fundamental elements of digital transformation: Lessons from the BRICS countries.” Computer Law & Security Review 54 (2024): 106017. https://doi.org/10.1016/j.clsr.2024.106017

[32] Irekponor, Obehi. “Designing resilient AI architectures for predictive energy finance systems amid data sovereignty, adversarial threats, and policy volatility.” International Journal of Research Publication and Reviews 6.6 (2025): 73-100. https://doi.org/10.55248/gengpi.6.0625.2125

[33] Tewari, Shishir, and Ashitosh Chitnis. “AI and multi-cloud compliance: Safeguarding data sovereignty.” (2024).
