
Cloud Computing Fundamentals: IaaS, PaaS, SaaS, AWS, Azure, GCP & Serverless Functions

Cloud computing fundamentals with the IaaS, PaaS, and SaaS models, a comparison of AWS, Azure, and GCP, and serverless functions with Lambda, Functions, practical examples, and best practices.

schutzgeist

2 min read


This post is a comprehensive introduction to cloud computing fundamentals – including the IaaS, PaaS, and SaaS models, a comparison of AWS, Azure, and GCP, and serverless functions with practical examples.

In a Nutshell

Cloud computing provides on-demand IT resources over the internet. IaaS gives you control over infrastructure, PaaS simplifies development, and SaaS delivers ready-to-use applications. Serverless eliminates server management entirely.

Technical Summary

Cloud computing is the delivery of IT resources (compute power, storage, databases) over the internet with a pay-as-you-go pricing model.

Service models:

IaaS (Infrastructure as a Service)

  • Concept: virtual infrastructure resources
  • Resources: VMs, storage, networking
  • Your responsibility: operating systems, middleware, applications
  • Examples: AWS EC2, Azure VMs, Google Compute Engine
  • Advantages: maximum flexibility, full control
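To make the IaaS model concrete, here is a minimal sketch of provisioning a virtual machine programmatically with boto3. The AMI ID, instance type, and tag values are placeholder assumptions, and the actual `run_instances` call is left commented out so the sketch runs without AWS credentials.

```python
# Sketch: provisioning an IaaS VM (AWS EC2) via boto3.
# The AMI ID and the tag values below are hypothetical placeholders.

def build_instance_params(ami_id, instance_type="t3.micro", name="demo-vm"):
    """Assemble the parameter dict for an EC2 run_instances call."""
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "TagSpecifications": [{
            "ResourceType": "instance",
            "Tags": [{"Key": "Name", "Value": name}],
        }],
    }

params = build_instance_params("ami-0123456789abcdef0")
# With credentials and a region configured, the actual call would be:
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.run_instances(**params)
```

With IaaS, everything above the hypervisor – patching the OS on that VM, middleware, the application – remains your responsibility.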

PaaS (Platform as a Service)

  • Concept: development and deployment platform
  • Resources: runtime, databases, development tools
  • Your responsibility: applications, data
  • Examples: AWS Elastic Beanstalk, Azure App Service, Google App Engine
  • Advantages: simplified development, automatic scaling

SaaS (Software as a Service)

  • Concept: ready-to-use software applications
  • Resources: the complete application
  • Your responsibility: usage and configuration only
  • Examples: Office 365, Salesforce, Gmail
  • Advantages: no installation, immediately usable

Serverless Computing

  • Concept: event-driven code execution
  • Resources: functions, triggers, APIs
  • Your responsibility: code and configuration only
  • Examples: AWS Lambda, Azure Functions, Google Cloud Functions
  • Advantages: no server management, pay-per-use
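The pay-per-use advantage can be illustrated with Lambda-style billing, which is metered in GB-seconds (allocated memory × execution time) plus a per-request fee. The prices below are illustrative assumptions, not current AWS list prices.

```python
# Sketch: serverless pay-per-use billing in GB-seconds.
# Both price constants are illustrative assumptions, not official list prices.

PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_REQUEST = 0.0000002

def monthly_cost(invocations, avg_duration_ms, memory_mb):
    """Estimate the monthly cost of a serverless function."""
    # GB-seconds = invocations * duration in seconds * memory in GB
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# Example: 1 million invocations per month, 200 ms each, 512 MB memory
cost = monthly_cost(1_000_000, 200, 512)
```

The point of the model: an idle function costs nothing, whereas an idle IaaS VM bills for every hour it exists.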

Exam-Relevant Key Points

  • Cloud computing: on-demand IT resources over the internet
  • IaaS: Infrastructure as a Service – virtual infrastructure
  • PaaS: Platform as a Service – development platform
  • SaaS: Software as a Service – ready-to-use applications
  • Serverless: event-driven code execution without server management
  • AWS: Amazon Web Services – the leading cloud platform
  • Azure: Microsoft's cloud – enterprise-focused
  • GCP: Google Cloud Platform – focused on data and AI
  • IHK-relevant: modern IT infrastructure and deployment strategies

Core Components

  1. Service-Modelle: IaaS, PaaS, SaaS, Serverless
  2. Cloud-Provider: AWS, Azure, GCP
  3. Compute Services: VMs, Containers, Functions
  4. Storage Services: Object, Block, File Storage
  5. Network Services: VPC, Load Balancer, CDN
  6. Database Services: SQL, NoSQL, Cache
  7. Security: IAM, Encryption, Compliance
  8. Monitoring: Logging, Metrics, Alerting

Practical Examples

1. AWS Lambda serverless function in Python

import json
import boto3
import os
import logging
from datetime import datetime

# Configure the logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# AWS Clients
dynamodb = boto3.resource('dynamodb')
s3_client = boto3.client('s3')
sns_client = boto3.client('sns')

# Environment Variables
TABLE_NAME = os.environ.get('TABLE_NAME', 'users')
BUCKET_NAME = os.environ.get('BUCKET_NAME', 'user-uploads')
SNS_TOPIC_ARN = os.environ.get('SNS_TOPIC_ARN')

# DynamoDB Table
table = dynamodb.Table(TABLE_NAME)

def lambda_handler(event, context):
    """
    Lambda Handler für User-Management
    Unterstützt verschiedene Event-Typen:
    - API Gateway: User CRUD Operationen
    - S3: File Upload Verarbeitung
    - DynamoDB Streams: Daten-Änderungen
    """
    
    try:
        # Event-Typ bestimmen
        event_source = event.get('Records', [{}])[0].get('eventSource', '')
        
        if 'aws:apigateway' in event_source or 'httpMethod' in event:
            return handle_api_gateway_event(event)
        elif 'aws:s3' in event_source:
            return handle_s3_event(event)
        elif 'aws:dynamodb' in event_source:
            return handle_dynamodb_event(event)
        else:
            return handle_direct_event(event)
            
    except Exception as e:
        logger.error(f"Error processing event: {str(e)}")
        return create_response(500, {'error': str(e)})

def handle_api_gateway_event(event):
    """API Gateway Events verarbeiten"""
    http_method = event.get('httpMethod', '')
    path = event.get('path', '')
    
    logger.info(f"API Gateway Event: {http_method} {path}")
    
    if http_method == 'GET' and path == '/users':
        return get_all_users()
    elif http_method == 'GET' and path.startswith('/users/'):
        user_id = path.split('/')[-1]
        return get_user(user_id)
    elif http_method == 'POST' and path == '/users':
        return create_user(json.loads(event.get('body', '{}')))
    elif http_method == 'PUT' and path.startswith('/users/'):
        user_id = path.split('/')[-1]
        return update_user(user_id, json.loads(event.get('body', '{}')))
    elif http_method == 'DELETE' and path.startswith('/users/'):
        user_id = path.split('/')[-1]
        return delete_user(user_id)
    else:
        return create_response(404, {'error': 'Endpoint not found'})

def handle_s3_event(event):
    """S3 Events verarbeiten (File Upload)"""
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        size = record['s3']['object']['size']
        
        logger.info(f"S3 Event: {bucket}/{key} ({size} bytes)")
        
        try:
            # Collect file information
            file_info = {
                'bucket': bucket,
                'key': key,
                'size': size,
                'upload_time': datetime.utcnow().isoformat(),
                'processed': False
            }
            
            # Fetch object metadata from S3
            response = s3_client.head_object(Bucket=bucket, Key=key)
            file_info.update({
                'content_type': response.get('ContentType', ''),
                'metadata': response.get('Metadata', {})
            })
            
            # Store the file information in DynamoDB
            table.put_item(
                Item={
                    'PK': f'FILE#{key}',
                    'SK': 'METADATA',
                    **file_info
                }
            )
            
            # Send a notification
            if SNS_TOPIC_ARN:
                send_notification(f"New file uploaded: {key}", file_info)
            
            logger.info(f"Successfully processed S3 event for {key}")
            
        except Exception as e:
            logger.error(f"Error processing S3 event for {key}: {str(e)}")
    
    return create_response(200, {'message': 'S3 event processed'})

def handle_dynamodb_event(event):
    """DynamoDB Streams verarbeiten"""
    for record in event.get('Records', []):
        event_name = record['eventName']
        
        if event_name == 'INSERT':
            new_image = record['dynamodb']['NewImage']
            handle_user_insert(new_image)
        elif event_name == 'MODIFY':
            old_image = record['dynamodb']['OldImage']
            new_image = record['dynamodb']['NewImage']
            handle_user_update(old_image, new_image)
        elif event_name == 'REMOVE':
            old_image = record['dynamodb']['OldImage']
            handle_user_delete(old_image)
    
    return create_response(200, {'message': 'DynamoDB event processed'})

def get_all_users():
    """Alle Benutzer abrufen"""
    try:
        response = table.scan(
            FilterExpression="begins_with(PK, :pk)",
            ExpressionAttributeValues={':pk': 'USER#'}
        )
        
        users = []
        for item in response.get('Items', []):
            users.append({
                'id': item.get('SK'),
                'username': item.get('username'),
                'email': item.get('email'),
                'created_at': item.get('created_at')
            })
        
        return create_response(200, {'users': users})
        
    except Exception as e:
        logger.error(f"Error getting users: {str(e)}")
        return create_response(500, {'error': str(e)})

def get_user(user_id):
    """Einen Benutzer abrufen"""
    try:
        response = table.get_item(
            Key={
                'PK': f'USER#{user_id}',
                'SK': 'PROFILE'
            }
        )
        
        if 'Item' in response:
            user = response['Item']
            return create_response(200, {'user': user})
        else:
            return create_response(404, {'error': 'User not found'})
            
    except Exception as e:
        logger.error(f"Error getting user {user_id}: {str(e)}")
        return create_response(500, {'error': str(e)})

def create_user(user_data):
    """Neuen Benutzer erstellen"""
    try:
        user_id = str(int(datetime.utcnow().timestamp() * 1000))
        
        user_item = {
            'PK': f'USER#{user_id}',
            'SK': 'PROFILE',
            'user_id': user_id,
            'username': user_data.get('username'),
            'email': user_data.get('email'),
            'created_at': datetime.utcnow().isoformat(),
            'status': 'active'
        }
        
        # Validate input
        if not user_data.get('username') or not user_data.get('email'):
            return create_response(400, {'error': 'Username and email are required'})
        
        # Save the user
        table.put_item(Item=user_item)
        
        # Send a notification
        if SNS_TOPIC_ARN:
            send_notification(f"New user created: {user_data.get('username')}", user_item)
        
        logger.info(f"Created user: {user_id}")
        
        return create_response(201, {'user': user_item})
        
    except Exception as e:
        logger.error(f"Error creating user: {str(e)}")
        return create_response(500, {'error': str(e)})

def update_user(user_id, user_data):
    """Benutzer aktualisieren"""
    try:
        # Check whether the user exists
        response = table.get_item(
            Key={
                'PK': f'USER#{user_id}',
                'SK': 'PROFILE'
            }
        )
        
        if 'Item' not in response:
            return create_response(404, {'error': 'User not found'})
        
        # Build the update expression; use an expression attribute name for
        # "status", because it is a DynamoDB reserved word
        update_parts = []
        expression_values = {}
        expression_names = {}
        
        if 'username' in user_data:
            update_parts.append("#username = :username")
            expression_values[':username'] = user_data['username']
            expression_names['#username'] = 'username'
        
        if 'email' in user_data:
            update_parts.append("email = :email")
            expression_values[':email'] = user_data['email']
        
        if 'status' in user_data:
            update_parts.append("#status = :status")
            expression_values[':status'] = user_data['status']
            expression_names['#status'] = 'status'
        
        update_parts.append("updated_at = :updated_at")
        expression_values[':updated_at'] = datetime.utcnow().isoformat()
        
        update_kwargs = {
            'Key': {
                'PK': f'USER#{user_id}',
                'SK': 'PROFILE'
            },
            'UpdateExpression': 'SET ' + ', '.join(update_parts),
            'ExpressionAttributeValues': expression_values
        }
        # boto3 rejects ExpressionAttributeNames=None, so pass it only when set
        if expression_names:
            update_kwargs['ExpressionAttributeNames'] = expression_names
        
        # Run the update
        table.update_item(**update_kwargs)
        
        logger.info(f"Updated user: {user_id}")
        
        return create_response(200, {'message': 'User updated successfully'})
        
    except Exception as e:
        logger.error(f"Error updating user {user_id}: {str(e)}")
        return create_response(500, {'error': str(e)})

def delete_user(user_id):
    """Benutzer löschen"""
    try:
        # Check whether the user exists
        response = table.get_item(
            Key={
                'PK': f'USER#{user_id}',
                'SK': 'PROFILE'
            }
        )
        
        if 'Item' not in response:
            return create_response(404, {'error': 'User not found'})
        
        # Delete the user
        table.delete_item(
            Key={
                'PK': f'USER#{user_id}',
                'SK': 'PROFILE'
            }
        )
        
        logger.info(f"Deleted user: {user_id}")
        
        return create_response(200, {'message': 'User deleted successfully'})
        
    except Exception as e:
        logger.error(f"Error deleting user {user_id}: {str(e)}")
        return create_response(500, {'error': str(e)})

def handle_user_insert(new_image):
    """User Insert Event verarbeiten"""
    user_id = new_image.get('user_id', {}).get('S')
    username = new_image.get('username', {}).get('S')
    
    logger.info(f"User inserted: {user_id} ({username})")
    
    # Additional logic for new users goes here,
    # e.g. sending a welcome email or creating default settings

def handle_user_update(old_image, new_image):
    """User Update Event verarbeiten"""
    user_id = new_image.get('user_id', {}).get('S')
    
    logger.info(f"User updated: {user_id}")
    
    # Log the changes
    changes = {}
    for key in new_image:
        if key in old_image and new_image[key] != old_image[key]:
            changes[key] = {
                'old': old_image[key],
                'new': new_image[key]
            }
    
    if changes:
        logger.info(f"User {user_id} changes: {changes}")

def handle_user_delete(old_image):
    """User Delete Event verarbeiten"""
    user_id = old_image.get('user_id', {}).get('S')
    username = old_image.get('username', {}).get('S')
    
    logger.info(f"User deleted: {user_id} ({username})")
    
    # Perform cleanup work here,
    # e.g. deleting associated data

def send_notification(message, data):
    """SNS Benachrichtigung senden"""
    try:
        sns_client.publish(
            TopicArn=SNS_TOPIC_ARN,
            Subject=message,
            Message=json.dumps(data, default=str)
        )
        logger.info(f"Notification sent: {message}")
    except Exception as e:
        logger.error(f"Error sending notification: {str(e)}")

def create_response(status_code, body):
    """HTTP Response erstellen"""
    return {
        'statusCode': status_code,
        'headers': {
            'Content-Type': 'application/json',
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Headers': 'Content-Type',
            'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS'
        },
        'body': json.dumps(body, default=str)
    }

def handle_direct_event(event):
    """Direkte Events (z.B. Scheduled Events)"""
    event_source = event.get('source', '')
    
    if event_source == 'aws.events':
        # Scheduled event (e.g. daily cleanup)
        return handle_scheduled_event(event)
    else:
        return create_response(400, {'error': 'Unknown event type'})

def handle_scheduled_event(event):
    """Scheduled Events verarbeiten"""
    logger.info("Processing scheduled event")
    
    # Tägliche Bereinigungsaufgaben
    # z.B. inaktive Benutzer deaktivieren, alte Logs löschen
    
    return create_response(200, {'message': 'Scheduled event processed'})
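Every branch of the handler above returns its result through the same API-Gateway-style response envelope. The snippet below is a standalone sketch of that envelope (the helper is re-declared here so it runs without any AWS dependencies), showing what a client actually receives:

```python
import json

# Standalone copy of the response helper from the Lambda example,
# so the envelope can be inspected without boto3 or AWS credentials.
def create_response(status_code, body):
    """Build an API-Gateway-compatible HTTP response."""
    return {
        'statusCode': status_code,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps(body, default=str),
    }

# What a client would receive for the "user not found" case:
response = create_response(404, {'error': 'User not found'})
payload = json.loads(response['body'])
```

Note that API Gateway expects `body` to be a JSON *string*, not a dict – which is why the handler serializes it with `json.dumps` instead of returning the dict directly.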

2. Azure Functions in C#

using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json;
using Azure;                         // ETag
using Azure.Data.Tables;
using Azure.Messaging.EventGrid;     // EventGridEvent
using Azure.Storage.Blobs;
using Azure.Storage.Queues;
using Azure.Communication.Email;

public class CloudFunctions
{
    private readonly string _connectionString = Environment.GetEnvironmentVariable("AzureWebJobsStorage");
    private readonly string _tableName = Environment.GetEnvironmentVariable("TableName") ?? "users";
    private readonly string _containerName = Environment.GetEnvironmentVariable("ContainerName") ?? "uploads";

    // HTTP Trigger Function
    [FunctionName("ProcessUser")]
    public async Task<IActionResult> ProcessUser(
        [HttpTrigger(AuthorizationLevel.Function, "post", Route = "users")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("C# HTTP trigger function processed a user request.");

        try
        {
            string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
            dynamic data = JsonConvert.DeserializeObject(requestBody);

            // Validate input
            if (data?.name == null || data?.email == null)
            {
                return new BadRequestObjectResult(new { error = "Name and email are required" });
            }

            // Table Storage Client
            var tableClient = new TableClient(_connectionString, _tableName);
            await tableClient.CreateIfNotExistsAsync();

            // Create the user entity
            var user = new TableEntity("User", Guid.NewGuid().ToString())
            {
                { "Name", data.name },
                { "Email", data.email },
                { "CreatedAt", DateTime.UtcNow },
                { "Status", "Active" }
            };

            // Save to Table Storage
            await tableClient.UpsertEntityAsync(user);

            // Send a queue message for further processing
            var queueClient = new QueueClient(_connectionString, "user-processing");
            await queueClient.CreateIfNotExistsAsync();
            await queueClient.SendMessageAsync(JsonConvert.SerializeObject(new { 
                Action = "UserCreated", 
                UserId = user.RowKey,
                Email = data.email 
            }));

            log.LogInformation($"User {user.RowKey} created successfully");

            return new OkObjectResult(new { 
                message = "User created successfully",
                userId = user.RowKey
            });
        }
        catch (Exception ex)
        {
            log.LogError($"Error processing user: {ex.Message}");
            return new StatusCodeResult(500);
        }
    }

    // Blob Trigger Function
    [FunctionName("ProcessImageUpload")]
    public async Task ProcessImageUpload(
        [BlobTrigger("uploads/{name}", Connection = "AzureWebJobsStorage")] Stream myBlob,
        string name,
        ILogger log)
    {
        log.LogInformation($"C# Blob trigger function ProcessImageUpload processed blob\n Name:{name} \n Size: {myBlob.Length} Bytes");

        try
        {
            // Process the image (e.g. create a thumbnail)
            if (name.ToLower().EndsWith(".jpg") || name.ToLower().EndsWith(".png"))
            {
                var blobServiceClient = new BlobServiceClient(_connectionString);
                var containerClient = blobServiceClient.GetBlobContainerClient(_containerName);
                
                // Create a thumbnail (simplified)
                var thumbnailBlobClient = containerClient.GetBlobClient($"thumbnails/{name}");
                await thumbnailBlobClient.UploadAsync(myBlob, overwrite: true);

                // Store metadata in Table Storage
                var tableClient = new TableClient(_connectionString, "imageMetadata");
                await tableClient.CreateIfNotExistsAsync();

                var metadata = new TableEntity("Image", name)
                {
                    { "OriginalSize", myBlob.Length },
                    { "ProcessedAt", DateTime.UtcNow },
                    { "ThumbnailPath", $"thumbnails/{name}" },
                    { "Status", "Processed" }
                };

                await tableClient.UpsertEntityAsync(metadata);

                log.LogInformation($"Image {name} processed successfully");
            }
        }
        catch (Exception ex)
        {
            log.LogError($"Error processing image {name}: {ex.Message}");
            throw;
        }
    }

    // Queue Trigger Function
    [FunctionName("ProcessUserQueue")]
    public async Task ProcessUserQueue(
        [QueueTrigger("user-processing", Connection = "AzureWebJobsStorage")] string queueMessage,
        ILogger log)
    {
        log.LogInformation($"C# Queue trigger function processed: {queueMessage}");

        try
        {
            dynamic message = JsonConvert.DeserializeObject(queueMessage);
            string action = message.Action;
            string userId = message.UserId;
            string email = message.Email;

            if (action == "UserCreated")
            {
                // Send a welcome email
                await SendWelcomeEmail(email, log);
                
                // Register the user in other systems
                await RegisterUserInExternalSystems(userId, email, log);
            }

            log.LogInformation($"Queue message processed for user {userId}");
        }
        catch (Exception ex)
        {
            log.LogError($"Error processing queue message: {ex.Message}");
            throw;
        }
    }

    // Timer Trigger Function
    [FunctionName("CleanupTask")]
    public async Task CleanupTask(
        [TimerTrigger("0 0 2 * * *")] TimerInfo myTimer,  // daily at 2 a.m.
        ILogger log)
    {
        log.LogInformation($"C# Timer trigger function executed at: {DateTime.Now}");

        try
        {
            // Clean up old temporary files
            await CleanupTempFiles(log);
            
            // Deactivate inactive users
            await DeactivateInactiveUsers(log);
            
            // Refresh statistics
            await UpdateStatistics(log);

            log.LogInformation("Cleanup task completed successfully");
        }
        catch (Exception ex)
        {
            log.LogError($"Error in cleanup task: {ex.Message}");
        }
    }

    // Event Grid Trigger Function
    [FunctionName("HandleResourceEvent")]
    public async Task HandleResourceEvent(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        log.LogInformation($"C# Event Grid trigger function processed event: {eventGridEvent.EventType}");

        try
        {
            // EventGridEvent.Data is BinaryData; parse its JSON payload first
            dynamic data = JsonConvert.DeserializeObject(eventGridEvent.Data.ToString());
            string resourceType = data.resourceType;
            string resourceName = data.resourceName;

            if (resourceType == "Microsoft.Storage/storageAccounts/blobServices")
            {
                // Handle the blob event
                log.LogInformation($"Blob event: {resourceName}");
            }
            else if (resourceType == "Microsoft.Sql/servers/databases")
            {
                // Handle the SQL event
                log.LogInformation($"SQL event: {resourceName}");
            }

            log.LogInformation($"Event processed: {eventGridEvent.EventType}");
        }
        catch (Exception ex)
        {
            log.LogError($"Error processing event: {ex.Message}");
        }
    }

    // Service Bus Trigger Function
    [FunctionName("ProcessOrder")]
    public async Task ProcessOrder(
        [ServiceBusTrigger("orders", Connection = "ServiceBusConnection")] string orderMessage,
        ILogger log)
    {
        log.LogInformation($"C# Service Bus trigger function processed message: {orderMessage}");

        try
        {
            dynamic order = JsonConvert.DeserializeObject(orderMessage);
            string orderId = order.OrderId;
            string customerId = order.CustomerId;
            decimal amount = order.Amount;

            // Process the order
            await ProcessOrderInDatabase(orderId, customerId, amount, log);
            
            // Send a notification
            await SendOrderConfirmation(customerId, orderId, amount, log);
            
            // Update the inventory
            await UpdateInventory(order, log);

            log.LogInformation($"Order {orderId} processed successfully");
        }
        catch (Exception ex)
        {
            log.LogError($"Error processing order: {ex.Message}");
            throw;
        }
    }

    // Helper Methods
    private async Task SendWelcomeEmail(string email, ILogger log)
    {
        try
        {
            // Azure Communication Services for email
            var connectionString = Environment.GetEnvironmentVariable("CommunicationServicesConnectionString");
            var emailClient = new EmailClient(connectionString);

            var emailMessage = new EmailMessage(
                "DoNotReply@yourdomain.com",
                email,
                "Welcome to our service!",
                "Thank you for registering. We're excited to have you on board!"
            );

            await emailClient.SendAsync(emailMessage);
            log.LogInformation($"Welcome email sent to {email}");
        }
        catch (Exception ex)
        {
            log.LogError($"Error sending welcome email: {ex.Message}");
        }
    }

    private async Task RegisterUserInExternalSystems(string userId, string email, ILogger log)
    {
        // Register in the CRM system
        log.LogInformation($"Registering user {userId} in CRM system");
        
        // Add to the newsletter system
        log.LogInformation($"Adding user {userId} to newsletter system");
        
        // Initialize analytics tracking
        log.LogInformation($"Initializing analytics for user {userId}");
    }

    private async Task CleanupTempFiles(ILogger log)
    {
        var blobServiceClient = new BlobServiceClient(_connectionString);
        var containerClient = blobServiceClient.GetBlobContainerClient("temp");
        
        await foreach (var blobItem in containerClient.GetBlobsAsync())
        {
            if (blobItem.Properties.LastModified < DateTime.UtcNow.AddDays(-1))
            {
                await containerClient.DeleteBlobAsync(blobItem.Name);
                log.LogInformation($"Deleted temp file: {blobItem.Name}");
            }
        }
    }

    private async Task DeactivateInactiveUsers(ILogger log)
    {
        var tableClient = new TableClient(_connectionString, _tableName);
        
        await foreach (var user in tableClient.QueryAsync<TableEntity>(filter: $"Status eq 'Active'"))
        {
            var lastLogin = user.GetDateTime("LastLogin") ?? user.GetDateTime("CreatedAt");
            
            if (lastLogin < DateTime.UtcNow.AddDays(-90))
            {
                user["Status"] = "Inactive";
                await tableClient.UpdateEntityAsync(user);
                log.LogInformation($"Deactivated inactive user: {user.RowKey}");
            }
        }
    }

    private async Task UpdateStatistics(ILogger log)
    {
        // Update statistics in a separate table
        var statsTable = new TableClient(_connectionString, "statistics");
        await statsTable.CreateIfNotExistsAsync();
        
        var stats = new TableEntity("Daily", DateTime.UtcNow.ToString("yyyy-MM-dd"))
        {
            { "ActiveUsers", await GetActiveUserCount() },
            { "TotalUsers", await GetTotalUserCount() },
            { "ProcessedImages", await GetProcessedImageCount() },
            { "UpdatedAt", DateTime.UtcNow }
        };
        
        await statsTable.UpsertEntityAsync(stats);
        log.LogInformation("Daily statistics updated");
    }

    private async Task<int> GetActiveUserCount()
    {
        // Implementation of the active-user count goes here
        return 0;
    }

    private async Task<int> GetTotalUserCount()
    {
        // Implementation of the total-user count goes here
        return 0;
    }

    private async Task<int> GetProcessedImageCount()
    {
        // Implementation of the processed-image count goes here
        return 0;
    }

    private async Task ProcessOrderInDatabase(string orderId, string customerId, decimal amount, ILogger log)
    {
        // Save the order to the database
        log.LogInformation($"Processing order {orderId} for customer {customerId}");
    }

    private async Task SendOrderConfirmation(string customerId, string orderId, decimal amount, ILogger log)
    {
        // Send the order confirmation
        log.LogInformation($"Sending order confirmation for {orderId}");
    }

    private async Task UpdateInventory(dynamic order, ILogger log)
    {
        // Update stock levels
        log.LogInformation($"Updating inventory for order {order.OrderId}");
    }
}

// Table entity for users
public class UserEntity : ITableEntity
{
    public string PartitionKey { get; set; }
    public string RowKey { get; set; }
    public DateTimeOffset? Timestamp { get; set; }
    public ETag ETag { get; set; }

    public string Name { get; set; }
    public string Email { get; set; }
    public DateTime CreatedAt { get; set; }
    public string Status { get; set; }
    public DateTime? LastLogin { get; set; }
}

3. Google Cloud Functions in JavaScript

const { Storage } = require('@google-cloud/storage');
const { PubSub } = require('@google-cloud/pubsub');
const { Datastore } = require('@google-cloud/datastore');
const { Firestore } = require('@google-cloud/firestore');
const { Logging } = require('@google-cloud/logging');

// Initialize the cloud clients
const storage = new Storage();
const pubsub = new PubSub();
const datastore = new Datastore();
const firestore = new Firestore();
const logging = new Logging();

// Logger
const logger = logging.log('cloud-functions');

/**
 * HTTP Trigger Function - User Management
 * @param {Object} req Express request object.
 * @param {Object} res Express response object.
 */
exports.userManagement = async (req, res) => {
    const method = req.method;
    const path = req.path;
    
    logger.info(`HTTP Request: ${method} ${path}`);
    
    try {
        if (method === 'GET' && path === '/users') {
            return await getAllUsers(req, res);
        } else if (method === 'GET' && path.startsWith('/users/')) {
            const userId = path.split('/')[2];
            return await getUser(userId, req, res);
        } else if (method === 'POST' && path === '/users') {
            return await createUser(req, res);
        } else if (method === 'PUT' && path.startsWith('/users/')) {
            const userId = path.split('/')[2];
            return await updateUser(userId, req, res);
        } else if (method === 'DELETE' && path.startsWith('/users/')) {
            const userId = path.split('/')[2];
            return await deleteUser(userId, req, res);
        } else {
            return res.status(404).json({ error: 'Endpoint not found' });
        }
    } catch (error) {
        logger.error(`Error in userManagement: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
};

/**
 * Cloud Storage Trigger - File Upload Processing
 * @param {Object} file The storage object.
 * @param {Object} context The event metadata.
 */
exports.processFileUpload = async (file, context) => {
    const bucketName = file.bucket;
    const fileName = file.name;
    const fileSize = file.size;
    const contentType = file.contentType;
    
    logger.info(`Processing file upload: ${bucketName}/${fileName} (${fileSize} bytes)`);
    
    try {
        // Extract metadata
        const [metadata] = await storage.bucket(bucketName).file(fileName).getMetadata();
        
        // Store the file information in Firestore
        const fileDoc = {
            bucketName,
            fileName,
            fileSize,
            contentType,
            uploadedAt: new Date(),
            metadata: metadata.metadata || {},
            processed: false
        };
        
        await firestore.collection('files').doc(fileName).set(fileDoc);
        
        // Run image processing for image uploads
        if (contentType && contentType.startsWith('image/')) {
            await processImage(bucketName, fileName);
        }
        
        // Publish a Pub/Sub message for further processing
        const dataBuffer = Buffer.from(JSON.stringify({
            action: 'file_uploaded',
            fileName,
            bucketName,
            fileSize
        }));
        
        await pubsub.topic('file-processing').publishMessage({ data: dataBuffer });
        
        logger.info(`File ${fileName} processed successfully`);
        
    } catch (error) {
        logger.error(`Error processing file ${fileName}: ${error.message}`);
        throw error;
    }
};

/**
 * Pub/Sub Trigger - Asynchronous Processing
 * @param {Object} message The Pub/Sub message object.
 * @param {Object} context The event metadata.
 */
exports.processAsyncTask = async (message, context) => {
    const data = JSON.parse(Buffer.from(message.data, 'base64').toString());
    const action = data.action;
    
    logger.info(`Processing async task: ${action}`);
    
    try {
        switch (action) {
            case 'file_uploaded':
                await handleFileUploaded(data);
                break;
            case 'user_created':
                await handleUserCreated(data);
                break;
            case 'order_placed':
                await handleOrderPlaced(data);
                break;
            default:
                logger.warn(`Unknown action: ${action}`);
        }
        
        logger.info(`Async task ${action} completed successfully`);
        
    } catch (error) {
        logger.error(`Error processing async task ${action}: ${error.message}`);
        throw error;
    }
};

/**
 * Firestore Trigger - Database Changes
 * @param {Object} change The Firestore change object.
 * @param {Object} context The event metadata.
 */
exports.handleUserChange = async (change, context) => {
    const userId = context.params.userId;
    const beforeData = change.before.exists ? change.before.data() : null;
    const afterData = change.after.exists ? change.after.data() : null;
    
    logger.info(`User change detected: ${userId}`);
    
    try {
        if (!beforeData && afterData) {
            // User created
            logger.info(`User created: ${userId}`);
            await onUserCreated(userId, afterData);
        } else if (beforeData && afterData) {
            // User updated
            logger.info(`User updated: ${userId}`);
            await onUserUpdated(userId, beforeData, afterData);
        } else if (beforeData && !afterData) {
            // User deleted
            logger.info(`User deleted: ${userId}`);
            await onUserDeleted(userId, beforeData);
        }
        
    } catch (error) {
        logger.error(`Error handling user change for ${userId}: ${error.message}`);
        throw error;
    }
};

/**
 * Scheduled Function - Daily Tasks
 * @param {Object} event The Cloud Scheduler event.
 * @param {Object} context The event metadata.
 */
exports.dailyMaintenance = async (event, context) => {
    logger.info('Running daily maintenance tasks');
    
    try {
        // Deactivate inactive users
        await deactivateInactiveUsers();
        
        // Clean up old files
        await cleanupOldFiles();
        
        // Update statistics
        await updateDailyStatistics();
        
        // Verify backups
        await verifyBackups();
        
        logger.info('Daily maintenance completed successfully');
        
    } catch (error) {
        logger.error(`Error in daily maintenance: ${error.message}`);
        throw error;
    }
};

// Helper Functions

async function getAllUsers(req, res) {
    try {
        const usersSnapshot = await firestore.collection('users').get();
        const users = [];
        
        usersSnapshot.forEach(doc => {
            users.push({
                id: doc.id,
                ...doc.data()
            });
        });
        
        return res.status(200).json({ users });
    } catch (error) {
        logger.error(`Error getting all users: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
}

async function getUser(userId, req, res) {
    try {
        const userDoc = await firestore.collection('users').doc(userId).get();
        
        if (!userDoc.exists) {
            return res.status(404).json({ error: 'User not found' });
        }
        
        return res.status(200).json({ 
            user: {
                id: userDoc.id,
                ...userDoc.data()
            }
        });
    } catch (error) {
        logger.error(`Error getting user ${userId}: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
}

async function createUser(req, res) {
    try {
        const { name, email, username } = req.body;
        
        // Validation
        if (!name || !email || !username) {
            return res.status(400).json({ 
                error: 'Name, email, and username are required' 
            });
        }
        
        // Check whether the user already exists
        const existingUser = await firestore.collection('users')
            .where('email', '==', email)
            .get();
        
        if (!existingUser.empty) {
            return res.status(409).json({ error: 'User with this email already exists' });
        }
        
        // Create the new user
        const userDoc = {
            name,
            email,
            username,
            createdAt: new Date(),
            status: 'active',
            lastLogin: null
        };
        
        const userRef = await firestore.collection('users').add(userDoc);
        
        // Publish a Pub/Sub message
        const dataBuffer = Buffer.from(JSON.stringify({
            action: 'user_created',
            userId: userRef.id,
            email,
            name
        }));
        
        await pubsub.topic('user-events').publishMessage({ data: dataBuffer });
        
        logger.info(`User created: ${userRef.id}`);
        
        return res.status(201).json({ 
            message: 'User created successfully',
            userId: userRef.id
        });
    } catch (error) {
        logger.error(`Error creating user: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
}

async function updateUser(userId, req, res) {
    try {
        const userDoc = await firestore.collection('users').doc(userId).get();
        
        if (!userDoc.exists) {
            return res.status(404).json({ error: 'User not found' });
        }
        
        const updates = req.body;
        updates.updatedAt = new Date();
        
        await firestore.collection('users').doc(userId).update(updates);
        
        logger.info(`User updated: ${userId}`);
        
        return res.status(200).json({ 
            message: 'User updated successfully' 
        });
    } catch (error) {
        logger.error(`Error updating user ${userId}: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
}

async function deleteUser(userId, req, res) {
    try {
        const userDoc = await firestore.collection('users').doc(userId).get();
        
        if (!userDoc.exists) {
            return res.status(404).json({ error: 'User not found' });
        }
        
        await firestore.collection('users').doc(userId).delete();
        
        logger.info(`User deleted: ${userId}`);
        
        return res.status(200).json({ 
            message: 'User deleted successfully' 
        });
    } catch (error) {
        logger.error(`Error deleting user ${userId}: ${error.message}`);
        return res.status(500).json({ error: error.message });
    }
}

async function processImage(bucketName, fileName) {
    try {
        // Create a thumbnail (simplified: the original is only copied here;
        // in practice you would use an image-processing library such as sharp)
        const thumbnailName = `thumbnails/${fileName}`;
        
        await storage.bucket(bucketName).file(fileName).copy(
            storage.bucket(bucketName).file(thumbnailName)
        );
        
        // Update metadata
        await firestore.collection('files').doc(fileName).update({
            thumbnailPath: thumbnailName,
            processed: true,
            processedAt: new Date()
        });
        
        logger.info(`Image processed: ${fileName}`);
        
    } catch (error) {
        logger.error(`Error processing image ${fileName}: ${error.message}`);
        throw error;
    }
}

async function handleFileUploaded(data) {
    const { fileName, bucketName } = data;
    
    // Additional processing
    logger.info(`Handling file uploaded event for: ${fileName}`);
    
    // Analytics-Tracking
    await firestore.collection('analytics').add({
        event: 'file_uploaded',
        fileName,
        bucketName,
        timestamp: new Date()
    });
}

async function handleUserCreated(data) {
    const { userId, email, name } = data;
    
    // Send a welcome email
    logger.info(`Sending welcome email to ${email}`);
    
    // Register the user in external systems
    logger.info(`Registering user ${userId} in external systems`);
    
    // Analytics-Event
    await firestore.collection('analytics').add({
        event: 'user_created',
        userId,
        email,
        name,
        timestamp: new Date()
    });
}

async function handleOrderPlaced(data) {
    const { orderId, customerId, amount } = data;
    
    logger.info(`Processing order ${orderId} for customer ${customerId}`);
    
    // Order processing
    await firestore.collection('orders').doc(orderId).set({
        orderId,
        customerId,
        amount,
        status: 'processing',
        createdAt: new Date()
    });
}

async function onUserCreated(userId, userData) {
    logger.info(`User created event: ${userId}`);
    
    // Additional logic for new users
    await firestore.collection('user_activity').add({
        userId,
        action: 'created',
        timestamp: new Date()
    });
}

async function onUserUpdated(userId, beforeData, afterData) {
    logger.info(`User updated event: ${userId}`);
    
    // Log the changes
    const changes = {};
    
    Object.keys(afterData).forEach(key => {
        if (beforeData[key] !== afterData[key]) {
            changes[key] = {
                before: beforeData[key],
                after: afterData[key]
            };
        }
    });
    
    if (Object.keys(changes).length > 0) {
        await firestore.collection('user_activity').add({
            userId,
            action: 'updated',
            changes,
            timestamp: new Date()
        });
    }
}

async function onUserDeleted(userId, userData) {
    logger.info(`User deleted event: ${userId}`);
    
    // Cleanup work
    await firestore.collection('user_activity').add({
        userId,
        action: 'deleted',
        timestamp: new Date()
    });
}

async function deactivateInactiveUsers() {
    const cutoffDate = new Date();
    cutoffDate.setDate(cutoffDate.getDate() - 90);
    
    const inactiveUsers = await firestore.collection('users')
        .where('lastLogin', '<', cutoffDate)
        .where('status', '==', 'active')
        .get();
    
    const batch = firestore.batch();
    
    inactiveUsers.forEach(doc => {
        const userRef = firestore.collection('users').doc(doc.id);
        batch.update(userRef, { status: 'inactive', deactivatedAt: new Date() });
    });
    
    await batch.commit();
    
    logger.info(`Deactivated ${inactiveUsers.size} inactive users`);
}

async function cleanupOldFiles() {
    const cutoffDate = new Date();
    cutoffDate.setDate(cutoffDate.getDate() - 30);
    
    const oldFiles = await firestore.collection('files')
        .where('uploadedAt', '<', cutoffDate)
        .where('processed', '==', true)
        .get();
    
    for (const doc of oldFiles.docs) {
        const { bucketName, fileName } = doc.data();
        
        try {
            await storage.bucket(bucketName).file(fileName).delete();
            await firestore.collection('files').doc(doc.id).delete();
        } catch (error) {
            logger.error(`Error deleting old file ${fileName}: ${error.message}`);
        }
    }
    
    logger.info(`Cleaned up ${oldFiles.size} old files`);
}

async function updateDailyStatistics() {
    const today = new Date().toISOString().split('T')[0];
    
    const activeUsers = await firestore.collection('users')
        .where('status', '==', 'active')
        .get();
    
    const totalUsers = await firestore.collection('users').get();
    
    const processedFiles = await firestore.collection('files')
        .where('processed', '==', true)
        .get();
    
    const stats = {
        date: today,
        activeUsers: activeUsers.size,
        totalUsers: totalUsers.size,
        processedFiles: processedFiles.size,
        updatedAt: new Date()
    };
    
    await firestore.collection('daily_stats').doc(today).set(stats);
    
    logger.info(`Daily statistics updated for ${today}`);
}

async function verifyBackups() {
    // Backup verification placeholder:
    // - check that daily backups exist
    // - verify backup integrity
    // - send a notification on problems
    logger.info('Verifying backups');
}

4. Terraform Infrastructure as Code

# Provider configuration
terraform {
  required_version = ">= 1.0"
  
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
    
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
  }
  
  backend "s3" {
    bucket = "terraform-state-bucket"
    key    = "cloud-infrastructure/terraform.tfstate"
    region = "eu-west-1"
  }
}

# AWS Provider
provider "aws" {
  region = var.aws_region
  
  default_tags {
    tags = {
      Environment = var.environment
      Project     = var.project_name
      ManagedBy   = "Terraform"
    }
  }
}

# Azure Provider
provider "azurerm" {
  features {}
  
  subscription_id = var.azure_subscription_id
  tenant_id       = var.azure_tenant_id
}

# Google Provider
provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region
}

# Variables
variable "project_name" {
  description = "Name of the project"
  type        = string
  default     = "cloud-app"
}

variable "environment" {
  description = "Environment (dev, staging, prod)"
  type        = string
  default     = "dev"
}

variable "aws_region" {
  description = "AWS region"
  type        = string
  default     = "eu-west-1"
}

variable "azure_subscription_id" {
  description = "Azure subscription ID"
  type        = string
  sensitive   = true
}

variable "azure_tenant_id" {
  description = "Azure tenant ID"
  type        = string
  sensitive   = true
}

variable "azure_location" {
  description = "Azure location for resources"
  type        = string
  default     = "westeurope"
}

variable "gcp_project_id" {
  description = "Google Cloud project ID"
  type        = string
  sensitive   = true
}

variable "gcp_region" {
  description = "Google Cloud region"
  type        = string
  default     = "europe-west1"
}

# AWS Resources

# S3 bucket for static content
resource "aws_s3_bucket" "static_content" {
  bucket = "${var.project_name}-${var.environment}-static-content"
  
  tags = {
    Purpose = "Static Content Storage"
  }
}

resource "aws_s3_bucket_versioning" "static_content" {
  bucket = aws_s3_bucket.static_content.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_public_access_block" "static_content" {
  bucket = aws_s3_bucket.static_content.id
  
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

# Lambda Function
resource "aws_iam_role" "lambda_role" {
  name = "${var.project_name}-${var.environment}-lambda-role"
  
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = "sts:AssumeRole"
        Effect = "Allow"
        Principal = {
          Service = "lambda.amazonaws.com"
        }
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "lambda_basic" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

resource "aws_iam_role_policy" "lambda_dynamodb" {
  name = "${var.project_name}-${var.environment}-lambda-dynamodb"
  role = aws_iam_role.lambda_role.id
  
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "dynamodb:GetItem",
          "dynamodb:PutItem",
          "dynamodb:UpdateItem",
          "dynamodb:DeleteItem",
          "dynamodb:Query",
          "dynamodb:Scan"
        ]
        Resource = "arn:aws:dynamodb:${var.aws_region}:*:table/${var.project_name}-${var.environment}-*"
      }
    ]
  })
}

resource "aws_lambda_function" "api_handler" {
  filename         = "lambda.zip"
  function_name    = "${var.project_name}-${var.environment}-api-handler"
  role             = aws_iam_role.lambda_role.arn
  handler          = "index.handler"
  runtime          = "python3.12"
  timeout          = 30
  
  source_code_hash = filebase64sha256("lambda.zip")
  
  environment {
    variables = {
      ENVIRONMENT = var.environment
      PROJECT     = var.project_name
    }
  }
  
  tags = {
    Purpose = "API Handler"
  }
}

# API Gateway
resource "aws_api_gateway_rest_api" "api" {
  name        = "${var.project_name}-${var.environment}-api"
  description = "API Gateway for ${var.project_name}"
}

resource "aws_api_gateway_resource" "users" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  parent_id   = aws_api_gateway_rest_api.api.root_resource_id
  path_part   = "users"
}

resource "aws_api_gateway_method" "users_get" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.users.id
  http_method   = "GET"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "users_get" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.users.id
  http_method = aws_api_gateway_method.users_get.http_method
  
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.api_handler.invoke_arn
}

resource "aws_api_gateway_method" "users_post" {
  rest_api_id   = aws_api_gateway_rest_api.api.id
  resource_id   = aws_api_gateway_resource.users.id
  http_method   = "POST"
  authorization = "NONE"
}

resource "aws_api_gateway_integration" "users_post" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  resource_id = aws_api_gateway_resource.users.id
  http_method = aws_api_gateway_method.users_post.http_method
  
  integration_http_method = "POST"
  type                    = "AWS_PROXY"
  uri                     = aws_lambda_function.api_handler.invoke_arn
}

resource "aws_api_gateway_deployment" "api" {
  rest_api_id = aws_api_gateway_rest_api.api.id
  
  triggers = {
    redeployment = sha1(jsonencode([
      aws_api_gateway_resource.users.id,
      aws_api_gateway_method.users_get.id,
      aws_api_gateway_method.users_post.id,
      aws_api_gateway_integration.users_get.id,
      aws_api_gateway_integration.users_post.id
    ]))
  }
  
  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_api_gateway_stage" "api" {
  deployment_id = aws_api_gateway_deployment.api.id
  rest_api_id   = aws_api_gateway_rest_api.api.id
  stage_name    = var.environment
  
  tags = {
    Environment = var.environment
  }
}

# Lambda permission for API Gateway (covers all methods and paths)
resource "aws_lambda_permission" "api_gateway" {
  statement_id  = "AllowExecutionFromAPIGateway"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.api_handler.function_name
  principal     = "apigateway.amazonaws.com"
  
  source_arn = "${aws_api_gateway_rest_api.api.execution_arn}/*/*"
}

# DynamoDB Table
resource "aws_dynamodb_table" "users" {
  name           = "${var.project_name}-${var.environment}-users"
  billing_mode   = "PAY_PER_REQUEST"
  hash_key       = "PK"
  range_key      = "SK"
  
  attribute {
    name = "PK"
    type = "S"
  }
  
  attribute {
    name = "SK"
    type = "S"
  }
  
  point_in_time_recovery {
    enabled = true
  }
  
  tags = {
    Purpose = "User Data"
  }
}

# Azure Resources

# Resource Group
resource "azurerm_resource_group" "main" {
  name     = "${var.project_name}-${var.environment}-rg"
  location = var.azure_location
  
  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Storage Account
resource "azurerm_storage_account" "main" {
  # Storage account names allow only lowercase letters and digits (3-24 chars)
  name                     = "${replace(var.project_name, "-", "")}${var.environment}storage"
  resource_group_name      = azurerm_resource_group.main.name
  location                 = azurerm_resource_group.main.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  
  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Function App
resource "azurerm_function_app" "main" {
  name                       = "${var.project_name}-${var.environment}-functions"
  location                   = azurerm_resource_group.main.location
  resource_group_name        = azurerm_resource_group.main.name
  app_service_plan_id        = azurerm_service_plan.main.id
  storage_account_name       = azurerm_storage_account.main.name
  storage_account_access_key = azurerm_storage_account.main.primary_access_key
  os_type                    = "linux"
  version                    = "~4"
  
  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME" = "node"
    "WEBSITE_NODE_DEFAULT_VERSION" = "14"
    "Environment" = var.environment
    "Project" = var.project_name
  }
  
  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# App Service Plan
resource "azurerm_service_plan" "main" {
  name                = "${var.project_name}-${var.environment}-asp"
  location            = azurerm_resource_group.main.location
  resource_group_name = azurerm_resource_group.main.name
  os_type             = "Linux"
  sku_name            = "B1"
  
  tags = {
    Environment = var.environment
    Project     = var.project_name
  }
}

# Google Cloud Resources

# Cloud Storage Bucket
resource "google_storage_bucket" "static_content" {
  name          = "${var.project_name}-${var.environment}-static"
  project       = var.gcp_project_id
  location      = var.gcp_region
  storage_class = "STANDARD"
  
  uniform_bucket_level_access = true
  
  labels = {
    environment = var.environment
    project     = var.project_name
  }
}

# Cloud Function
resource "google_cloudfunctions_function" "api_handler" {
  name        = "${var.project_name}-${var.environment}-api-handler"
  description = "API Handler Function"
  runtime     = "python312"
  
  available_memory_mb = 256
  timeout             = 60
  
  source_archive_bucket = google_storage_bucket.function_code.name
  source_archive_object = google_storage_bucket_object.function_code.name
  
  trigger_http = true
  
  entry_point = "handler"
  
  environment_variables = {
    ENVIRONMENT = var.environment
    PROJECT     = var.project_name
  }
  
  labels = {
    environment = var.environment
    project     = var.project_name
  }
}

# Bucket for the function code
resource "google_storage_bucket" "function_code" {
  name          = "${var.project_name}-${var.environment}-functions"
  project       = var.gcp_project_id
  location      = var.gcp_region
  storage_class = "STANDARD"
  
  uniform_bucket_level_access = true
}

# Function Code Upload
resource "google_storage_bucket_object" "function_code" {
  name   = "function-code.zip"
  bucket = google_storage_bucket.function_code.name
  source = "function-code.zip"
}

# Firestore Database (location and type cannot be changed after creation)
resource "google_firestore_database" "main" {
  project     = var.gcp_project_id
  name        = "${var.project_name}-${var.environment}"
  location_id = var.gcp_region
  type        = "FIRESTORE_NATIVE"
}

# Outputs
output "aws_api_url" {
  description = "AWS API Gateway URL"
  value       = aws_api_gateway_stage.api.invoke_url
}

output "azure_function_url" {
  description = "Azure Function App URL"
  value       = "https://${azurerm_function_app.main.default_hostname}/api"
}

output "gcp_function_url" {
  description = "Google Cloud Function URL"
  value       = google_cloudfunctions_function.api_handler.https_trigger_url
}

output "aws_s3_bucket" {
  description = "AWS S3 Bucket name"
  value       = aws_s3_bucket.static_content.id
}

output "azure_storage_account" {
  description = "Azure Storage Account name"
  value       = azurerm_storage_account.main.name
}

output "gcp_storage_bucket" {
  description = "Google Cloud Storage Bucket name"
  value       = google_storage_bucket.static_content.name
}

Cloud Service Model Comparison

| Feature     | IaaS        | PaaS        | SaaS         | Serverless  |
|-------------|-------------|-------------|--------------|-------------|
| Control     | Maximum     | Medium      | Minimal      | Minimal     |
| Flexibility | High        | Medium      | Low          | High        |
| Management  | Full        | Partial     | None         | Minimal     |
| Scaling     | Manual      | Auto        | Auto         | Auto        |
| Cost        | Pay-per-use | Pay-per-use | Subscription | Pay-per-use |
| Complexity  | High        | Medium      | Low          | Medium      |

Cloud Provider Comparison

AWS vs Azure vs GCP

| Criterion     | AWS           | Azure       | GCP       |
|---------------|---------------|-------------|-----------|
| Market share  | ~32%          | ~23%        | ~11%      |
| Services      | 200+          | 100+        | 90+       |
| Pricing       | Complex       | Transparent | Low-cost  |
| Documentation | Comprehensive | Good        | Good      |
| Community     | Largest       | Enterprise  | Growing   |
| AI/ML         | SageMaker     | Azure ML    | Vertex AI |

Compute Services Comparison

| Service    | AWS     | Azure            | GCP             |
|------------|---------|------------------|-----------------|
| VMs        | EC2     | Virtual Machines | Compute Engine  |
| Containers | ECS/EKS | AKS              | GKE             |
| Serverless | Lambda  | Functions        | Cloud Functions |
| Batch      | Batch   | Batch            | Cloud Batch     |

Serverless Architecture Patterns

Event-Driven Architecture

graph TD
    A[Client] --> B[API Gateway]
    B --> C[Lambda Function]
    C --> D[Database]
    C --> E[Queue]
    E --> F[Processor Function]
    F --> G[Storage]
    F --> H[Notification]
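
The flow in the diagram can be sketched in miniature with an in-process queue: the API handler enqueues an event instead of calling the processor directly, so the two sides scale and fail independently. The names (`api_handler`, `processor`, the `storage` list) are illustrative stand-ins, not a real cloud SDK.

```python
# Minimal in-process sketch of the event-driven pattern above.
from queue import Queue

event_queue: Queue = Queue()
storage: list = []

def api_handler(payload: dict) -> dict:
    # Hand the work off asynchronously instead of doing it inline
    event_queue.put({"action": "process", "payload": payload})
    return {"status": "accepted"}

def processor() -> None:
    # Drains the queue, analogous to a queue-triggered function
    while not event_queue.empty():
        event = event_queue.get()
        storage.append(event["payload"])

api_handler({"order": 42})
processor()
```

In a real deployment the queue would be SQS, Pub/Sub, or a similar managed service, and `processor` would be triggered by the queue rather than called explicitly.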

Microservices with Serverless

graph LR
    A[User Service] --> B[API Gateway]
    C[Order Service] --> B
    D[Product Service] --> B
    B --> E[Frontend]
    
    A --> F[Event Bus]
    C --> F
    D --> F
    
    F --> G[Analytics]
    F --> H[Notifications]

Pricing Models

Pay-per-Use Components

  • Compute: CPU usage per second/millisecond
  • Memory: memory usage per GB-second
  • Storage: GB per month
  • Network: data transfer per GB
  • Requests: number of invocations
  • Duration: execution time
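
A back-of-the-envelope estimate combines these components. The prices below are illustrative placeholders (not current list prices) and the function name is an assumption for this sketch.

```python
# Sketch: estimating monthly serverless cost from pay-per-use components.
# Prices are illustrative placeholders, not current list prices.
PRICE_PER_GB_SECOND = 0.0000166667   # compute/memory component
PRICE_PER_MILLION_REQUESTS = 0.20    # request component

def monthly_function_cost(invocations, avg_duration_ms, memory_mb):
    """Cost = GB-seconds * compute price + requests * request price."""
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    return compute_cost + request_cost

# 10M invocations at 200 ms with 256 MB -> ~10.33 under the placeholder prices
cost = monthly_function_cost(10_000_000, 200, 256)
```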

Cost Optimization Strategies

# Cost monitoring example (sketch)
def monitor_costs():
    """Query the provider's cost APIs: AWS Cost Explorer,
    Azure Cost Management, Google Cloud Billing."""
    # Optimization levers:
    # 1. Rightsizing of resources
    # 2. Scheduled scaling
    # 3. Reserved instances
    # 4. Spot instances
    # 5. Auto-scaling

Security Best Practices

Cloud Security Layers

  1. Identity & Access Management (IAM)
  2. Network Security (VPC, Firewalls)
  3. Data Encryption (At Rest, In Transit)
  4. Compliance (GDPR, HIPAA, SOC2)
  5. Monitoring (CloudTrail, Audit Logs)

Security Configuration

# Example Security Policies
security_policies:
  iam:
    - principle_of_least_privilege
    - mfa_required
    - regular_access_review
  
  network:
    - private_subnets
    - security_groups
    - ddos_protection
  
  data:
    - encryption_at_rest
    - encryption_in_transit
    - key_rotation

Multi-Cloud Strategies

Hybrid Cloud Architecture

graph TB
    subgraph "On-Premise"
        A[Legacy Systems]
        B[Database]
    end
    
    subgraph "Private Cloud"
        C[VMs]
        D[Storage]
    end
    
    subgraph "Public Cloud"
        E[Serverless]
        F[Containers]
        G[AI/ML]
    end
    
    A --> E
    B --> F
    C --> G
    D --> E

Multi-Cloud Benefits

  • Vendor Lock-in Avoidance
  • Best-of-Breed Services
  • Cost Optimization
  • Geographic Distribution
  • Disaster Recovery

Migration Strategies

Cloud Migration Approaches

  1. Rehost (Lift and Shift)
  2. Replatform (Lift and Reshape)
  3. Repurchase (Drop and Shop)
  4. Refactor (Re-architect)
  5. Retire (Decommission)
  6. Retain (Keep On-Premise)

Migration Planning

# Migration Assessment Tool
def assess_migration_readiness():
    factors = {
        'application_complexity': 'high',
        'data_sensitivity': 'medium',
        'compliance_requirements': 'high',
        'team_skills': 'medium',
        'budget_constraints': 'medium'
    }
    
    # Recommendation Engine
    if factors['application_complexity'] == 'low':
        return 'rehost'
    elif factors['team_skills'] == 'high':
        return 'refactor'
    else:
        return 'replatform'

Monitoring & Observability

Cloud Monitoring Stack

graph LR
    A[Applications] --> B[Logs]
    A --> C[Metrics]
    A --> D[Traces]
    
    B --> E[Log Analytics]
    C --> F[Metrics Dashboard]
    D --> G[APM Tools]
    
    E --> H[Alerting]
    F --> H
    G --> H

Key Metrics

  • Performance: Response Time, Throughput
  • Availability: Uptime, Error Rate
  • Cost: Resource Utilization, Spend
  • Security: Failed Logins, Threats
  • Business: User Engagement, Revenue
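
Two of these metrics can be computed directly from raw counters; the helper names and sample numbers below are illustrative.

```python
# Sketch: deriving availability and error rate from raw counters.
def availability(uptime_seconds: float, total_seconds: float) -> float:
    """Uptime as a percentage of the observation window."""
    return uptime_seconds / total_seconds * 100

def error_rate(errors: int, requests: int) -> float:
    """Failed requests as a percentage of all requests."""
    return errors / requests * 100 if requests else 0.0

# ~30-day month with 260 seconds of downtime, 12 errors in 48k requests
a = availability(2_591_740, 2_592_000)
e = error_rate(12, 48_000)
```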

Advantages and Disadvantages

Advantages of Cloud Computing

  • Scalability: on-demand resources
  • Flexibility: pay-per-use model
  • Global Reach: worldwide availability
  • Innovation: fast access to new technologies
  • Cost Efficiency: no upfront investment

Disadvantages

  • Vendor Lock-in: dependence on a provider
  • Security Concerns: data stored off-premises
  • Compliance: regulatory requirements
  • Complexity: multi-cloud management
  • Cost Overruns: uncontrolled spending

Common Exam Questions

  1. What is the difference between IaaS, PaaS and SaaS? IaaS provides infrastructure, PaaS provides a platform, SaaS provides finished software - with decreasing control and management effort.

  2. Explain serverless computing! Serverless means running code without server management: pay-per-use billing, automatic scaling and event-based triggers.

  3. When do you use multi-cloud strategies? To avoid vendor lock-in, use best-of-breed services, optimize cost and achieve geographic distribution.

  4. What are the most important security aspects in the cloud? IAM, network security, data encryption, compliance and monitoring.

Key Sources

  1. https://aws.amazon.com/
  2. https://azure.microsoft.com/
  3. https://cloud.google.com/
  4. https://www.terraform.io/