Category: Information Technology

  • Why AWS Bottlerocket is a Must-Have for Containerized Workloads

    Why AWS Bottlerocket is a Must-Have for Containerized Workloads

    Containers have completely transformed how I approach building, deploying, and managing applications. Their lightweight nature and ability to encapsulate dependencies have made them the foundation of my modern development workflows. When I discovered AWS Bottlerocket, a Linux-based operating system (OS) from Amazon Web Services, it felt like the perfect match for optimizing and securing containerized environments. Let me share my experience with what it is, its capabilities, and why I think it’s worth considering.

    What is AWS Bottlerocket?

    AWS Bottlerocket is an open-source, minimalist OS tailored specifically for running containers. Being open-source means that Bottlerocket benefits from community-driven contributions, which ensures regular updates and innovation. This aspect also allows businesses and developers to customize the OS to meet their specific needs, offering unparalleled flexibility. Open-source adoption fosters a sense of transparency and trust, making it easier for organizations to audit the code and adapt it for their unique use cases, which is especially valuable in sensitive or highly regulated environments. Unlike traditional operating systems that come with a variety of software packages, Bottlerocket is stripped down to include only what is necessary for container orchestration. This design reduces the attack surface and simplifies management.

    Key Capabilities

    Container-First Architecture Bottlerocket is designed from the ground up to run containers efficiently. Its architecture eliminates the overhead of traditional OS features that are unnecessary for containerized workloads. By focusing solely on container support, Bottlerocket ensures better performance and compatibility with orchestration tools like Kubernetes and Amazon ECS. This container-first approach streamlines operations, enabling developers and DevOps teams to focus on application performance rather than OS management.

    Atomic Updates Managing OS updates is a common pain point in production environments. Bottlerocket simplifies this process with its image-based atomic update mechanism, which differs from traditional OS update methods that typically work at the package level. With traditional approaches, updates can be inconsistent, leading to dependency issues or partial updates that destabilize the system. Bottlerocket’s image-based updates, on the other hand, apply changes in a single, atomic operation, ensuring consistency and making it easy to roll the entire system back to a previous version if anything goes wrong. This improves reliability, simplifies maintenance, and minimizes downtime, which is critical for maintaining production workloads.

    Built-in Security Features Security is a top priority in containerized environments, and Bottlerocket addresses this with several built-in features. The OS uses a read-only root filesystem, which significantly reduces the risk of unauthorized changes. For instance, during one of my deployments, I realized that having a read-only root filesystem prevented a malicious script from overwriting critical system files. This feature ensures that even if an attacker gains limited access, they cannot easily tamper with the OS or compromise its integrity. Additionally, SELinux is enforced by default, providing mandatory access controls that enhance security. Bottlerocket’s minimalist design reduces the number of components, thereby limiting potential vulnerabilities and making it easier to secure the environment.

    Integration with AWS Ecosystem For businesses already leveraging AWS services, Bottlerocket offers seamless integration with tools like Amazon EKS, ECS, and AWS Systems Manager (SSM). This tight integration simplifies deployment and management, allowing teams to use familiar AWS interfaces to control and monitor their containerized workloads. This makes Bottlerocket an ideal choice for organizations heavily invested in the AWS ecosystem.
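
    As a small illustration of that integration, AWS publishes Bottlerocket AMI IDs as public SSM parameters, so nodes can be launched with the latest image without hard-coding AMI IDs. Below is a minimal boto3 sketch; the parameter path follows the documented pattern for the ECS variant on x86_64 and should be adjusted to your variant, architecture, and region.

    import boto3

    ssm = boto3.client("ssm")

    # ECS variant shown here; for EKS nodes the documented pattern is
    # /aws/service/bottlerocket/aws-k8s-<version>/x86_64/latest/image_id
    param = ssm.get_parameter(
        Name="/aws/service/bottlerocket/aws-ecs-1/x86_64/latest/image_id"
    )
    print("Latest Bottlerocket AMI:", param["Parameter"]["Value"])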

    Open-Source and Extensible As an open-source project, Bottlerocket is accessible to developers who want to customize it to suit their specific needs. The community-driven nature of the project ensures regular updates, improvements, and a robust support network. Businesses can extend Bottlerocket’s functionality or adapt it to unique requirements, providing flexibility for a wide range of use cases.

    Why Use AWS Bottlerocket?

    Enhanced Security The OS’s design prioritizes security by reducing potential vulnerabilities through its minimalistic architecture and advanced security features. This makes it a safer choice for running containerized workloads in environments where data protection is critical.

    Operational Efficiency With features like atomic updates and AWS integration, Bottlerocket reduces the operational complexity associated with managing containerized environments. This enables teams to focus on scaling and optimizing their applications rather than spending time on infrastructure management.

    Optimized for Containers Unlike traditional operating systems that cater to a broad range of applications, Bottlerocket is purpose-built for containers. This specialization results in better performance, streamlined workflows, and fewer compatibility issues, making it ideal for containerized applications.

    Cost Savings By simplifying operations and reducing downtime, Bottlerocket helps businesses save on operational costs. Its integration with AWS services further reduces the need for additional tools and infrastructure, offering a cost-effective solution for containerized environments.

    Community and Support As an AWS-supported project with an active community, Bottlerocket benefits from continuous improvements and a wealth of resources for troubleshooting and customization. This ensures businesses can rely on a stable and evolving platform.

    Who Should Use AWS Bottlerocket?

    • Startups and Enterprises: Businesses looking for a secure, efficient, and scalable OS for containerized applications.
    • DevOps Teams: Teams aiming to simplify container orchestration and management.
    • Cloud-Native Developers: Developers building applications specifically for Kubernetes or Amazon ECS.

    Integrating AWS Bottlerocket into existing development workflows was a surprisingly smooth process for me. That said, it wasn’t entirely without challenges. Initially, I struggled with ensuring Bottlerocket’s SELinux policies didn’t conflict with some of my custom container images. Debugging these issues required a deep dive into policy configurations, but once resolved, it became a learning moment that improved my security posture. Another hurdle was aligning Bottlerocket’s atomic update process with my CI/CD pipeline’s tight deployment schedules. After a bit of fine-tuning and scheduling updates during lower-traffic periods, I was able to integrate Bottlerocket without disrupting workflows. These challenges, while momentarily frustrating, were ultimately outweighed by the long-term operational benefits Bottlerocket provided.

    Since Bottlerocket is designed with container-first principles, it fit seamlessly into my ECS setups (yes, I do not have a production Kubernetes cluster in my personal account 😀 ). I started by using Bottlerocket on nodes in my test Amazon EKS setups, and its built-in compatibility with AWS Systems Manager made configuration and monitoring straightforward. The atomic update mechanism also helped ensure that OS updates didn’t disrupt ongoing workloads, a critical feature for my CI/CD pipelines and anyone else’s. Adopting Bottlerocket didn’t just simplify OS management; it also improved security and reduced the operational overhead I used to deal with when managing traditional operating systems in containerized environments.

    AWS Bottlerocket is a game-changer for containerized environments because it combines a purpose-built design with exceptional security and operational benefits. Its seamless integration with AWS tools, support for atomic updates, and container-first architecture make it stand out from traditional operating systems. By reducing operational overhead and improving reliability, Bottlerocket addresses key challenges faced by teams managing containerized workloads, making it an excellent choice for developers and organizations looking to optimize their containerized application environments. Whether you’re running workloads on Amazon EKS, ECS, or other Kubernetes environments, Bottlerocket is worth considering for your next project.


  • Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 4

    Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 4

    In Part 1, Part 2, and Part 3, I covered the legal basis, backup strategy, policy implementation, locking the recovery points stored in the vault, and applying vault policy to prevent specific actions.

    In this part, I will dive deeply into two essential compliance-related topics: Legal Holds and Audit Manager.

    Legal Holds

    AWS Backup Legal Holds are designed to help comply with legal and regulatory requirements by preventing the deletion of recovery points that may be needed for legal purposes, such as audits, investigations, or litigation. Legal holds are the assurance that critical recovery points are retained and protected from being accidentally or intentionally deleted or altered. At first, this feature might sound similar to the Vault Lock feature, which also prevents deletion of the recovery points if the Vault is in compliance mode. The differences are:

    1. Legal Holds can be modified or removed by a user with the proper privilege
    2. Legal Holds tie the recovery points to a date range, regardless of the lifecycle
    3. Legal Holds can be applied to both Vaults and resource types categorically
    4. You are limited to 50 scopes per legal hold

    The Legal Holds’ date range might initially seem confusing, considering that the recovery points continue to be added to a specific vault. But the use case for Legal Holds differs!

    Legal Holds are helpful in use cases such as retaining specific resource types, or recovery points stored in a particular vault, to prevent them from being deleted if the backup was taken within the hold’s date range, regardless of the lifecycle. For example, a data breach occurs, and the bank must investigate and report on the incident. Backups related to the breach are stored in an S3 bucket; the database snapshot and EBS volumes are all stored in the Daily vault and need to be preserved for both internal review and external reporting to regulators for the next two years from the date of the incident. In this scenario, Legal Holds can be used to protect the recovery points related to the investigation.

    From the Backup console sidebar menu, navigate to Legal Holds and add a new Legal Hold.

    All backups taken from 24.09.2024 until 03.10.2024 will be retained until the legal hold is removed.

    It is also possible to add tags when creating a legal hold to make the protected resources more easily identifiable.
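
    For reference, the same legal hold can be created programmatically. Here is a hedged boto3 sketch, assuming the incident window and the Daily vault from the example above; the title, description, and tag are illustrative placeholders.

    from datetime import datetime, timezone

    import boto3

    backup = boto3.client("backup")

    # Place a legal hold on the recovery points in the Daily vault whose backups
    # were taken within the incident window (24.09.2024 - 03.10.2024).
    response = backup.create_legal_hold(
        Title="incident-2024-09-breach",
        Description="Preserve recovery points related to the 2024-09 incident",
        RecoveryPointSelection={
            "VaultNames": ["Daily"],
            "DateRange": {
                "FromDate": datetime(2024, 9, 24, tzinfo=timezone.utc),
                "ToDate": datetime(2024, 10, 3, tzinfo=timezone.utc),
            },
        },
        Tags={"incident": "2024-09-breach"},
    )
    print(response["LegalHoldId"])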

    Audit Manager

    AWS Backup Audit Manager was announced in 2021, and it is one of the most critical features for legal compliance and reporting on cloud infrastructure backup protection. Without the Audit Manager, a company must implement custom tools and scripts to provide a similar report to auditors and regulators.

    Firstly, AWS Config must be enabled for the Audit Manager frameworks to function. AWS Config is required because resource changes are tracked via Config, including which resources are deployed in an account, which resources are part of a backup plan, and so on.

    On the home page of Audit Manager Frameworks, you will see a helpful set of getting-started steps:

    Before creating a framework, let’s look at another feature of Audit Manager called Reports.

    Report plans allow you to create recurring and on-demand reports to audit your backup activity, including cross-account and cross-Region reports.

    Not all reports require a Framework. Here is how they work:

    The two Compliance reports will report on the state of resources in conjunction with the pre-defined framework. You can look at the compliance framework as a representation of the organization’s backup policy written in a document.

    Let’s create a Framework to understand it better. I named the Framework PolicyComplianceFramework. There are 11 controls that can be configured:

    1. Resources are protected by a backup plan
    2. Backup plan minimum frequency and minimum retention
    3. Vaults prevent manual deletion of recovery points
    4. Recovery points are encrypted
    5. Minimum retention established for recovery point
    6. Cross-Region backup copy scheduled
    7. Cross-account backup copy scheduled
    8. Backups protected by AWS Backup Vault Lock
    9. Last recovery point created – new
    10. Restore time for resources meet target – new
    11. Resources are inside a logically air-gapped vault – new

    As you can see, the controls cover a reasonably wide range of evaluations. Each control can be configured independently based on a specific resource type, resource tag, or even a single resource.
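
    For those who prefer code over the console, here is a hedged boto3 sketch of creating a framework with just two of the controls. The control and parameter names follow the AWS Backup Audit Manager documentation and should be verified against the current API; omitting ControlScope is assumed to apply a control to all supported resources, and the retention value is illustrative.

    import boto3

    backup = boto3.client("backup")

    # Minimal framework with two of the eleven controls; the remaining controls
    # can be appended to the same list with their own input parameters.
    backup.create_framework(
        FrameworkName="PolicyComplianceFramework",
        FrameworkDescription="Backup policy compliance checks",
        FrameworkControls=[
            {"ControlName": "BACKUP_RESOURCES_PROTECTED_BY_BACKUP_PLAN"},
            {
                "ControlName": "BACKUP_RECOVERY_POINT_MINIMUM_RETENTION_CHECK",
                "ControlInputParameters": [
                    {"ParameterName": "requiredRetentionDays", "ParameterValue": "30"}
                ],
            },
        ],
    )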

    I made some changes to the control settings to meet my backup policy compliance report requirements:

    I disabled the following three controls, and here is why:

    1. Vaults prevent manual deletion of recovery points.
      • Evaluates if backup vaults do not allow manual deletion of recovery points with the exception of certain IAM roles.
      • Why disabled? All vaults must have a lock enabled in compliance mode, which does not allow deletion by design.
    2. Cross-account backup copy scheduled
      • Evaluates if resources have a cross-account backup copy configured.
      • Why disabled? AWS Backup does not support cross-account AND cross-region copies of the recovery points simultaneously. I am copying the recovery points to another region, and as such, no cross-account copy is possible.
    3. Resources are inside a logically air-gapped vault – new
      • Evaluates if resources have at least one recovery point copied to a logically air-gapped vault within the past 1 day.
      • Why disabled? I am not using air-gapped vaults, as I prefer to control the recovery points’ cross-region copy and its storage location.

    After the framework is created, it will take some time to aggregate and evaluate the resources. As a reminder, frameworks cannot function without AWS Config.

    Now that the audit framework is created, I will explain the Backup Report Plans.

    In almost every certification, and indeed DORA included, the auditors will ask for a backup job report. A report plan is one way to automate the report generation and ease the auditing process. You will need to decide which accounts and OUs are included in the job report, and you also need to create an S3 bucket to store the automatically generated reports.

    With this report plan created, I can now provide the backup job success/failure report to auditors anytime. Also, I will make one more report that is linked to the framework that was created.

    This report will be a compliance report to ensure that all the resources covered by the backup policy are in compliance. Similarly, you must configure the OUs and accounts in the report and an S3 bucket to store the daily report. The report refreshes every 24 hours.
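
    Both report plans can also be defined through the API. Here is a hedged boto3 sketch; the bucket name, OU ID, regions, and framework ARN are placeholders, and the exact ReportSetting fields for cross-account and cross-Region reporting should be checked against the current API.

    import boto3

    backup = boto3.client("backup")

    # Backup job report for the auditors.
    backup.create_report_plan(
        ReportPlanName="backup_job_report",
        ReportDeliveryChannel={
            "S3BucketName": "my-backup-audit-reports",
            "Formats": ["CSV", "JSON"],
        },
        ReportSetting={
            "ReportTemplate": "BACKUP_JOB_REPORT",
            "OrganizationUnits": ["ou-xxxx-prodexample"],
            "Regions": ["eu-central-1", "eu-west-1"],
        },
    )

    # Compliance report tied to the framework created earlier.
    backup.create_report_plan(
        ReportPlanName="policy_compliance_report",
        ReportDeliveryChannel={"S3BucketName": "my-backup-audit-reports"},
        ReportSetting={
            "ReportTemplate": "RESOURCE_COMPLIANCE_REPORT",
            "FrameworkArns": [
                "arn:aws:backup:eu-central-1:111122223333:framework/PolicyComplianceFramework"
            ],
        },
    )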

    In this part, we reviewed how AWS Backup Audit Manager works and how Legal Holds can be handy in specific use cases. Finally, we generated audit reports based on the created framework and stored the reports in an S3 bucket.

    As a reminder, here is what the target architecture diagram looks like:

    In the next part, I will elaborate on AWS Backup Restore Testing.

    End of Part 4 – Stay tuned!

  • Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 3

    Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 3

    In Part 1 and Part 2, I covered the legal basis, backup strategies, and retention, and briefly touched on vaults. In this part, I will cover policy creation in the root or backup delegated account, configuring vaults and their compliance lock, legal holds, and resource selection.

    Vaults Configuration

    To begin with, let’s create three standard vaults for all the recovery points: daily, monthly, and yearly.

    We’ll need to repeat this step for monthly and yearly vaults.
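
    If you prefer to script this step, here is a minimal boto3 sketch; the KMS key ARN is a placeholder, and omitting EncryptionKeyArn falls back to the default AWS Backup key.

    import boto3

    backup = boto3.client("backup")

    kms_key_arn = "arn:aws:kms:eu-central-1:111122223333:key/REPLACE_ME"

    # Create the three standard vaults used throughout this series.
    for vault_name in ("Daily", "Monthly", "Yearly"):
        backup.create_backup_vault(
            BackupVaultName=vault_name,
            EncryptionKeyArn=kms_key_arn,
        )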

    Once the vaults are created, we can add specific policies to them. Let’s not forget that one of DORA’s requirements is about who can access the recovery points, which also includes who can delete them. Click on all the vaults, edit the access policy, and add this policy:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Principal": {
                    "AWS": "*"
                },
                "Action": [
                    "backup:DeleteBackupVault",
                    "backup:DeleteBackupVaultAccessPolicy",
                    "backup:DeleteRecoveryPoint",
                    "backup:StartCopyJob",
                    "backup:UpdateRecoveryPointLifecycle"
                ],
                "Resource": "*"
            }
        ]
    }

    As you can see, the policy denies anyone from deleting the recovery points, starting a copy job (whether within the same account or to any account outside the organization), or modifying the recovery point lifecycle.
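
    To apply the same deny policy to all three vaults programmatically, a small boto3 sketch could look like this; the vault names follow the ones created above.

    import json

    import boto3

    backup = boto3.client("backup")

    # The exact policy document from above, applied to each vault.
    deny_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Deny",
                "Principal": {"AWS": "*"},
                "Action": [
                    "backup:DeleteBackupVault",
                    "backup:DeleteBackupVaultAccessPolicy",
                    "backup:DeleteRecoveryPoint",
                    "backup:StartCopyJob",
                    "backup:UpdateRecoveryPointLifecycle",
                ],
                "Resource": "*",
            }
        ],
    }

    for vault_name in ("Daily", "Monthly", "Yearly"):
        backup.put_backup_vault_access_policy(
            BackupVaultName=vault_name,
            Policy=json.dumps(deny_policy),
        )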

    Vault Compliance Lock

    Firstly, let’s explain what Vault Compliance Lock is, its variants, and its features:

    A backup vault is a container that stores and organizes your backups. When creating a backup vault, you must specify the AWS Key Management Service (AWS KMS) encryption key that encrypts some of the backups placed in this vault. Encryption for other backups is managed by their source AWS services.

    What is Vault Lock?

    A vault lock enforces retention periods that prevent early deletions by privileged users, such as the AWS account root user. Whether you can remove the vault lock depends on the vault lock mode.

    How many modes of Vault Lock are there?

    • Vaults locked in governance mode can have the lock removed by users with sufficient IAM permissions.
    • Vaults locked in compliance mode cannot be deleted once the cooling-off period (“grace time“) expires if any recovery points are in the vault. During grace time, you can still remove the vault lock and change the lock configuration.

    Governance Mode allows authorized users some flexibility to modify or delete backups, while Compliance Mode locks down backups completely, preventing any changes until the retention period is over. Compliance Mode offers stricter control and is often used in environments requiring regulatory compliance. In contrast, the Governance Mode is more suitable for operational governance, where authorized personnel may still need to manage backups.

    Let’s enable the Vault Lock in compliance mode. Why compliance? Because it simplifies the audit process and clearly demonstrates the deletion protection, auditors will not request additional proof, even if they are not already familiar with AWS Backup features.

    Once the vault lock is created, it enters the state of “Compliance lock in grace time”. The message indicates that the lock can still be removed within the next three days.
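
    Scripting the lock is a single call per vault. A minimal sketch for the Daily vault, with illustrative retention bounds and the three-day grace period, could look like this:

    import boto3

    backup = boto3.client("backup")

    # Compliance-mode lock: setting ChangeableForDays starts the grace time,
    # after which the lock becomes immutable. Retention bounds are illustrative
    # and should match each vault's lifecycle.
    backup.put_backup_vault_lock_configuration(
        BackupVaultName="Daily",
        MinRetentionDays=30,
        MaxRetentionDays=35,
        ChangeableForDays=3,  # three-day grace time before the lock is permanent
    )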

    Backup Policy Creation

    As explained previously, you have two options:

    1. Manage cross-account backup policies from the root account
    2. Delegate an account to administer the backup policies

    I will use the root account for policy creation as it is my test organization account. Still, it is best to use a delegated administrator to avoid using the root account for day-to-day operations.

    Click on Backup policies from the AWS Backup console under the “My Organization” section, then click “Create backup policy.” I will create two similar backup policies except for the retention period:

    1. 30_365_2555
    2. 30_365_3652

    You might wonder how many backup policies you would need to create if you have many cases in your organization due to different retentions per resource or per the data the resource contains. If you require many policies because of retention requirements, as explained previously, then using infrastructure as code is a must; a Terraform object and a loop can create all the policies for you from a single resource definition.

    In the screenshots, you can see that my backup policy, and the plan within it, are named after the retention tag structure that I have created. Next comes the backup rule, which retains the daily backups for 35 days and replicates a copy of the recovery points into another vault in a different region or account.

    IMPORTANT: as of now, AWS Backup does not support the replication/copy of recovery points in cross-region AND cross-account. You must decide which one is more important.

    In the architecture diagram, which I will reshare at the bottom of the article, you can see that the replication plan is to copy to eu-west-1 or the AWS European Sovereign Cloud. The AWS European Sovereign Cloud is a new isolated region that is being launched next year and is meant to be used by highly regulated entities and governments within Europe. You can read more about it here: https://aws.amazon.com/compliance/europe-digital-sovereignty/

    In the resource assignment section, I choose the default service role and use the tag key “backup” with the value “30_365_2555”.
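
    For completeness, here is a rough sketch of what such a policy could look like when created through the Organizations API with boto3. The document follows the published backup policy syntax and should be validated before use; regions, ARNs, and the cron schedule are placeholders, and only the daily rule is shown.

    import json

    import boto3

    orgs = boto3.client("organizations")

    # Monthly and yearly rules follow the same shape with their own vaults and
    # delete_after_days values.
    policy_content = {
        "plans": {
            "30_365_2555": {
                "regions": {"@@assign": ["eu-central-1"]},
                "rules": {
                    "daily": {
                        "schedule_expression": {"@@assign": "cron(0 2 ? * * *)"},
                        "target_backup_vault_name": {"@@assign": "Daily"},
                        "lifecycle": {"delete_after_days": {"@@assign": "30"}},
                        "copy_actions": {
                            "arn:aws:backup:eu-west-1:$account:backup-vault:Daily": {
                                "target_backup_vault_arn": {
                                    "@@assign": "arn:aws:backup:eu-west-1:$account:backup-vault:Daily"
                                },
                                "lifecycle": {"delete_after_days": {"@@assign": "30"}},
                            }
                        },
                    }
                },
                "selections": {
                    "tags": {
                        "tagged_resources": {
                            "iam_role_arn": {
                                "@@assign": "arn:aws:iam::$account:role/service-role/AWSBackupDefaultServiceRole"
                            },
                            "tag_key": {"@@assign": "backup"},
                            "tag_value": {"@@assign": ["30_365_2555"]},
                        }
                    }
                },
            }
        }
    }

    response = orgs.create_policy(
        Name="30_365_2555",
        Description="Backup policy for resources tagged backup=30_365_2555",
        Type="BACKUP_POLICY",
        Content=json.dumps(policy_content),
    )
    print(response["Policy"]["PolicySummary"]["Id"])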

    With this policy created, I can now back up all resources across my AWS Organization that are tagged with the proper tag key and value and are attached to the backup policy. Simple!

    Continuous Backup

    AWS Backup supports a feature called Continuous Backup. It provides point-in-time recovery (PITR) for Aurora, RDS, S3, and SAP HANA on Amazon EC2 resources. Continuous Backup continuously tracks changes to the resources and enables the ability to restore them to any specific second within a defined retention period. This significantly reduces data loss (close to none) during an incident.

    A few important points to remember:

    • Continuous backup is generally safer because it captures data changes continuously and allows for recovery at any specific point in time, offering better protection against data loss between scheduled snapshots.
    • Snapshot backups are typically faster to recover because they represent a complete, point-in-time state of the resource, so there’s no need to reconstruct incremental changes.
    • Continuous backups are limited to 35 days.

    Policy Attachment

    AWS Backup Policy Attachment allows us to assign the backup plans to resources running across the AWS Organization OUs or accounts. By attaching the backup policy to an OU within the organization called “Prod”, all my resources in that OU are now being backed up.
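
    The attachment itself is one API call. A minimal sketch, with the policy ID and OU ID as placeholders (the policy ID comes from the create_policy response, the OU ID from the Organizations console or list_organizational_units_for_parent):

    import boto3

    orgs = boto3.client("organizations")

    # Attach the backup policy to the "Prod" OU.
    orgs.attach_policy(
        PolicyId="p-examplepolicyid",
        TargetId="ou-xxxx-prodexample",
    )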

    In this part, we have created the vaults, vault lock, and backup policy and finally attached the policy to specific OU targets.

    As a reminder, here is what the target architecture diagram looks like:

    In the next part, I will elaborate on AWS Backup Legal Holds and Audit Manager.

    End of Part 3 – Stay tuned!

  • My CloudWatch testimonial on AWS website

    My CloudWatch testimonial on AWS website

    I was asked to provide a testimonial for CloudWatch because of our level of usage and the way it was architected and used in the organization. Here is where you can see my words:

    https://aws.amazon.com/cloudwatch/customers

    Screenshot:

    M. Reza Ganji AWS Testimonial

  • Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 2

    Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 2

    In Part 1, I emphasized DORA’s requirements and the overall architecture of resource backup within an organization. In this part, I will focus on backup initiation strategies, vaults, retention of the recovery points, and tagging policy.

    Backup Strategies

    If the resources in your AWS Organization are managed via code, aka infrastructure as code, you are in good shape. Otherwise, you will need to spend some time categorizing and structuring your resources based on their type and data retention.

    First, let’s define the retention of the resources and their data based on the legal requirements. For example, as a financial entity, you must retain specific data about customers or their transactions for between 7 and 35 years! This means the data deletion process, which is also a GDPR requirement, must be aligned with the data backup process; otherwise, you will end up retaining backups that do not contain all the customer data that is legally required.

    To make the relationship between GDPR and backups more understandable, look at the timeline below:

    Now, let’s review the process:

    1. You take daily, monthly, and yearly backups. Daily recovery points are retained for 30 days, monthly for 12 months, and yearly for 35 years.
    2. Every team has a pre-defined process for triggering data deletion, based on a notification received from an event bus every 35 days.
    3. The customer’s personally identifiable data gets deleted, and you retain the customer data for only another 30 days, since the daily backup recovery points are the only items still containing it.

    What I described above as an example scenario is a highly misaligned plan of action for a financial institution, but it can happen! To stay compliant and retain the data, nullifying the customer’s PII data is always easier than deleting it. Retaining the customer data in the warm storage of the production database without needing it is not exactly ideal. Still, if you do not have a properly structured data warehouse that complies with the regulatory requirements and is built for compliance needs, then you do not have much of a choice.

    Now that you understand the relationship between GDPR data deletion and backups and how you should consider it, we will move on to the backup policy.

    In my view, AWS Backup is one of the best solutions AWS has released in the storage category for compliance and operations. You can operate AWS Backup from the root account, or delegate administration to a dedicated backup account to limit root account usage and exposure, which is the best practice. The architecture diagram I provide works with either approach.

    The goal is to create a backup policy that controls the resources deployed in any AWS organization account. A backup policy based on a legal requirement will likely be set to back up resources across multiple AWS accounts. Thus, numerous backup policies with different sets of rules are needed to satisfy the legal and compliance needs.

    In this scenario, let’s assume we only need to create two backup policies: one with seven years of yearly retention (let’s call it transactions db) and another with ten years (rewards db). Both the daily and monthly backup policies are identical.

    |                 | daily | monthly | yearly |
    |-----------------|-------|---------|--------|
    | transactions db | 30    | 365     | 2555   |
    | rewards db      | 30    | 365     | 3652   |

    DB retentions in days

    An AWS Backup policy that is part of an Organizations policy only supports tag-based resource selection. This means a tag-based policy is your best friend for implementing backups cross-account.

    |                 | daily | monthly | yearly | tag            |
    |-----------------|-------|---------|--------|----------------|
    | transactions db | 30    | 365     | 2555   | db_30_365_2555 |
    | rewards db      | 30    | 365     | 3652   | db_30_365_3652 |

    DB retentions in days

    If you look at the image above, you see that the restore testing candidate is true, but it is not part of the tag. That is because a separate tag key and value will be used for automated restore testing, which is also a DORA requirement.
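
    As a small, hedged illustration of this tag-based approach, the Resource Groups Tagging API can apply the backup tags in bulk. The resource ARNs are placeholders, the tag values follow the table above, and the tag key "backup" is an assumption matching the policy shown later in this series.

    import boto3

    tagging = boto3.client("resourcegroupstaggingapi")

    # Each database gets the tag value matching its retention row in the table.
    backup_tags = {
        "arn:aws:rds:eu-central-1:111122223333:db:transactions-db": "db_30_365_2555",
        "arn:aws:rds:eu-central-1:111122223333:db:rewards-db": "db_30_365_3652",
    }

    for arn, tag_value in backup_tags.items():
        tagging.tag_resources(ResourceARNList=[arn], Tags={"backup": tag_value})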

    Backup Vaults

    There are three retention categories defined in the backup strategy: daily, monthly, and yearly.

    What is AWS Backup Vault?

    AWS Backup Vaults are secure (virtual) storage containers within the AWS Backup service, designed to store backup copies of AWS resources such as Amazon EC2 instances, Amazon RDS databases, Amazon EBS volumes, and more. A vault provides a centralized, organized location for managing and protecting the recovery points.

    Key Features of AWS Backup Vaults:

    1. Secure Storage:
      • Backup vaults offer encryption for backups at rest using AWS Key Management Service (KMS) encryption keys. You can specify a KMS key to encrypt your backups for added security.
    2. Centralized Management:
      • Backup vaults help in managing backups from various AWS services in a centralized place, simplifying the backup management process across multiple AWS accounts and regions.
    3. Cross-Region and Cross-Account Backups:
      • AWS Backup allows you to create backup copies in different regions (cross-region backups) and share them across different AWS accounts (cross-account backups) for improved resilience against regional failures and data loss.
    4. Access Control:
      • Backup vaults are integrated with AWS Identity and Access Management (IAM), allowing you to control who can access and manage backups stored in the vault. You can define detailed policies for who can create, restore, or delete backups.
    5. Retention and Lifecycle Policies:
      • You can define retention policies for backups stored in the vault, specifying how long backups should be retained before they are automatically deleted. This helps in compliance with data retention regulations like DORA.
    6. Monitoring and Audit:
      • AWS Backup integrates with AWS CloudTrail, providing detailed logs of backup operations such as creation, deletion, and restoration of backups. This enables auditing and tracking of all backup activities.
    7. Immutable Backups:
      • AWS Backup Vault Lock is a feature that allows you to make backups immutable, meaning they cannot be deleted or altered for a specified period, which helps in protecting against accidental or malicious data loss (useful for compliance with regulations).

    As a reminder, here is what the target architecture diagram looks like:

    In the next part, I will elaborate on the backup policy creation, legal hold, and vault lock.

    End of Part 2 – Stay tuned!

  • Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 1

    Mastering AWS Backups: DORA Compliance with Robust Backup & Restoration Strategies – Part 1

    When discussing cloud resource backup and restoration, there are many ways to handle them. You may wonder what the best way is! Should you use backup and restoration software that you might already be familiar with from your on-premises data center, like Veeam? Or should you consider using software built in the age of cloud-native solutions?

    You will find the answers to those questions in this post. I tried to simplify the selection process based on organizational needs. If you must comply with DORA, this is the right stop for you. If you do not have to comply with DORA and you still want to take control of your backups in a comprehensive manner, then you are doing the right thing, as anything can happen at any time, especially with ever-changing resources running on the cloud.

    The Digital Operational Resilience Act (DORA) outlines specific requirements for financial entities regarding the resilience of their ICT systems, including clear mandates for resource backup, backup testing, restoration, auditing, and retention. Here is a summary of the exact DORA requirements related to these aspects, along with references to their respective clauses. I will then explain how to meet each of these requirements:

    1. Backup Requirements

    • Regular backups: Financial institutions must ensure regular backups of critical data and ICT resources. Backups should be done frequently and stored in secure locations.
      Reference: Article 11, paragraph 1(b).
    • Data availability: Backup systems should ensure that critical data and systems remain available, even in cases of severe operational disruptions.
      Reference: Article 11, paragraph 2.

    2. Backup Testing

    • Regular testing of backups: Backups should be tested periodically to ensure that the data can be recovered effectively. This includes testing for data integrity, recovery procedures, and accessibility of critical systems during potential incidents.
      Reference: Article 11, paragraph 1(e).
    • Simulated disaster recovery: Financial entities must simulate disaster recovery scenarios to ensure that they can recover critical functions and data within the required timeframes.
      Reference: Article 11, paragraph 4.

    3. Restoration Requirements

    • Timely restoration: Procedures for restoring critical data and ICT systems must be in place to ensure operational continuity within predefined recovery time objectives (RTOs).
      Reference: Article 11, paragraph 1(d).
    • Recovery point objectives (RPOs): Institutions should set and maintain appropriate recovery point objectives to minimize data loss during recovery.
      Reference: Article 11, paragraph 2(b).

    4. Audit Requirements

    • Audit trail: Financial institutions must maintain a comprehensive and secure audit trail of backup processes, testing procedures, and restoration activities to ensure traceability and accountability.
      Reference: Article 16, paragraph 1.
    • Third-party audits: For outsourced backup or recovery services, financial entities must ensure that third-party providers also comply with audit requirements and that their performance is regularly reviewed.
      Reference: Article 28, paragraph 4.

    5. Retention Requirements

    • Retention policies: Financial institutions must establish clear data retention policies for backup data, aligned with legal and regulatory obligations. These policies should ensure that data is retained for a period long enough to address operational and legal requirements, while also considering data minimization principles.
      Reference: Article 11, paragraph 1(c).
    • Data deletion: Procedures must be in place to securely delete or dispose of backup data once the retention period has expired, in compliance with data protection laws (such as GDPR).
      Reference: Article 11, paragraph 5.

    6. Backup Security

    • Access control: Backups must be protected by strong access control measures to prevent unauthorized access, alteration, or deletion. This includes encryption of backup data both in transit and at rest.
      Reference: Article 11, paragraph 1(e).
    • Physical security: If backups are stored off-site or in a separate physical location, financial entities must ensure that these locations are secure and comply with applicable security requirements.
      Reference: Article 11, paragraph 1(b).

    7. Redundancy and Geographical Distribution

    • Geographically diverse backup locations: Backups should be stored in geographically diverse locations to ensure that natural disasters or regional disruptions do not affect the ability to recover critical data.
      Reference: Article 11, paragraph 1(b).
    • Redundant backup infrastructure: Institutions should maintain redundant backup systems to ensure continuous availability even if the primary backup system fails.
      Reference: Article 11, paragraph 1(e).

    8. Backup and Disaster Recovery Plan Integration

    • Integration with disaster recovery plans: Backups must be integrated into the broader disaster recovery and business continuity plans. This includes ensuring backup procedures and recovery times align with overall incident response and resilience strategies.
      Reference: Article 11, paragraph 1(d).

    9. Outsourcing Backup Services

    • Oversight of third-party providers: If backups are outsourced to third-party service providers (e.g., cloud providers), financial entities must ensure that the provider adheres to the same DORA requirements and regularly assess the provider’s performance, including backup reliability and security.
      Reference: Article 28, paragraph 2.

    10. Backup Documentation

    • Documented backup processes: Institutions are required to document their backup strategies, including frequency, testing schedules, and recovery procedures. Documentation should be kept up to date and accessible to relevant personnel.
      Reference: Article 11, paragraph 1(f).

    11. Incident Notification

    • Reporting backup failures: Any incident in which backup or recovery processes fail must be reported to the appropriate authorities as part of DORA’s incident reporting requirements.
      Reference: Article 19, paragraph 1.

    12. Continuous Monitoring and Improvement

    • Continuous monitoring of backup systems: Financial entities must continuously monitor the effectiveness of their backup systems and adjust processes to address evolving risks or operational changes.
      Reference: Article 12, paragraph 1.

    First things first, I am assuming a few things, which I will list here:

    1. Multiple AWS accounts are running in a multi-account architecture
    2. All resources must be backed up
    3. You have control over all the resources that need to be backed up
    4. You know the data that is stored in the databases well and its legal retention requirement

    Based on the requirements of DORA, together with the assumptions about what your infrastructure looks like, let’s draw a visual for better understanding:

    In the next parts, I will explain how to implement each of the components in the diagram above and how to meet the DORA requirements.

    End of Part 1 – Stay tuned!

  • Driving Success with Large-Scale DevSecOps Solutions: A Techie’s Journey

    Driving Success with Large-Scale DevSecOps Solutions: A Techie’s Journey

    Mastering DevSecOps: Leveraging Business and Technical Acumen

    Developing and implementing large-scale DevSecOps solutions demands a harmonious blend of business acumen and technical expertise. The primary goal is to integrate security seamlessly into the DevOps process, ensuring that security considerations are embedded throughout the development lifecycle. This holistic approach to security mitigates potential vulnerabilities early in the development phase, reducing risks and enhancing the overall resilience of the software.

    One of the critical aspects of successful DevSecOps implementation is the ability to combine business knowledge with technical skills. This dual approach enables the identification of complex issues and the generation of innovative solutions that are both technically sound and aligned with business objectives. For instance, understanding the business impact of security breaches allows for prioritizing security measures that protect critical assets while maintaining operational efficiency.

    Strategic initiatives play a pivotal role in delivering effective DevSecOps solutions. These initiatives often involve the adoption of advanced security tools, the establishment of robust security protocols, and the continuous monitoring of the development environment. By leveraging these strategies, organizations can achieve a higher level of security compliance and operational excellence. Examples of successful implementations include automated security testing, which integrates security checks into the CI/CD pipeline, and the deployment of security information and event management (SIEM) systems for real-time threat detection and response.

    Proactive collaboration with cross-functional teams is essential for driving competitive advantage in DevSecOps. This collaboration ensures that all stakeholders, from developers to security analysts, are aligned and working towards a common goal. Methodologies such as Agile and Scrum facilitate this integration by promoting continuous communication and iterative progress. Tools like Jira and Confluence support these methodologies by providing platforms for tracking progress and sharing knowledge, ensuring that security is a shared responsibility across the organization.

    In conclusion, mastering DevSecOps involves a meticulous balance of business insight and technical prowess. Through strategic initiatives and collaborative efforts, organizations can build secure, resilient software systems that meet both business objectives and security standards, ultimately driving success in a competitive landscape.

    Building Stakeholder Relationships: Meeting Needs and Ensuring Compliance

    In the realm of large-scale DevSecOps solutions, the importance of fostering robust stakeholder relationships cannot be overstated. Engaging stakeholders effectively is paramount to ensuring their needs and expectations are met, while simultaneously adhering to stringent regulatory standards. The commitment to cultivating these relationships begins with a transparent and goal-oriented approach, laying a foundation of trust and reliability.

    One of the primary strategies employed to communicate effectively with stakeholders is the establishment of regular, structured interactions. This includes frequent meetings, detailed progress reports, and open channels of communication. By maintaining this level of engagement, stakeholders are kept informed about the developments and any potential challenges that may arise. These interactions are critical in aligning the project’s objectives with stakeholder expectations, thereby minimizing discrepancies and fostering a sense of partnership.

    Managing stakeholder expectations is another crucial aspect. This involves a meticulous understanding of their requirements and the constraints within which the project operates. By setting realistic goals and delivering on promises, the trust and confidence of stakeholders are reinforced. The author’s approach to managing these expectations is characterized by their ability to balance immediate needs with long-term objectives, ensuring that the solutions provided are not only compliant but also sustainable.

    Compliance management is seamlessly integrated into the stakeholder engagement process. Through a proactive stance on regulatory adherence, stakeholders are assured that their investments are protected from legal and operational risks. This is achieved by staying abreast of the latest regulatory changes and embedding compliance into every phase of the DevSecOps lifecycle. Examples of successful engagements include instances where the author has navigated complex regulatory landscapes to deliver solutions that are both innovative and compliant, thereby exceeding stakeholder expectations.

    Ultimately, the goal is to deliver exceptional service, resources, and methods that align with the business objectives of stakeholders. By adopting a strategic and empathetic approach to stakeholder relationships, the author demonstrates a profound understanding of their needs, ensuring that each engagement is a step towards collective success.

  • Presented at Financial Services Industry (FSI) Forum Berlin hosted by AWS

    Presented at Financial Services Industry (FSI) Forum Berlin hosted by AWS

    Satyajit Ranjeev and I had the pleasure of attending the Amazon Web Services (AWS) FSI Forum in Berlin. The AWS FSI Forum is dedicated to inspiring and connecting the Financial Services community in Germany, Austria and Switzerland.

    M. Reza and Satyajit took to the stage to share their insights from Solaris’ migration journey to AWS, highlighting how it has fundamentally transformed our infrastructure and service security from the ground up, improving our foundational technologies and fostering an architectural evolution.

    “Our deeper engagement with AWS technologies has enabled us to develop solutions that benefit our partners and customers alike. An added advantage of our AWS integration has been our ability to manage and even reduce the costs associated with infrastructure and maintenance without compromising service quality.”

    https://www.linkedin.com/posts/solariscompany_awsforum-fsiforumberlin-cloud-activity-7202238505832382464-2UrH

  • Configuring AWS Control Tower with AWS SSO and Azure AD

    Configuring AWS Control Tower with AWS SSO and Azure AD

    Limitations:

    AWS SSO Limitations:

    • a. AWS SSO can only be used with AWS Control Tower in the same AWS Region.
    • b. AWS SSO can be associated with only one AWS Control Tower instance at a time.
    • c. AWS SSO can only federate with one external identity provider (IdP) at a time.

    Azure Active Directory (AAD) Limitations:

    • a. Azure AD can only be used as an external identity provider (IdP) with AWS SSO, which then integrates with AWS Control Tower.
    • b. Azure AD must be configured as a SAML-based IdP to integrate with AWS SSO.
    • c. There might be certain limitations or restrictions specific to Azure AD features or configurations when used in conjunction with AWS SSO.

    Control Tower Limitations:

    • a. Control Tower supports only SAML-based federation for single sign-on (SSO) with AWS SSO.
    • b. Control Tower doesn’t support other identity federation protocols like OpenID Connect (OIDC).
    • c. Control Tower currently supports only one AWS account as the management account.

    Miscellaneous Limitations:

    • a. Ensure compatibility of SAML versions between AWS SSO and Azure AD. AWS SSO supports SAML 2.0, but Azure AD might support multiple versions. Verify compatibility and adjust SAML configurations accordingly.

    Considerations:

    When configuring AWS Control Tower with AWS SSO and Azure Active Directory (AAD), there are several considerations to keep in mind:

    1. Identity Source and User Management:
      • a. Decide on the primary identity source for user management. In this case, it would be either AWS SSO or Azure AD. Consider factors such as user provisioning, synchronization, and group management capabilities each identity source provides.
      • b. Determine how AWS SSO and Azure AD will synchronize user accounts and groups. This can be done manually or by leveraging automation tools like AWS Directory Service or Azure AD Connect.
    2. SAML Configuration:
      • a. Ensure that the SAML configurations between AWS SSO and Azure AD are compatible. Verify the SAML versions supported by each service and adjust the configuration accordingly.
      • b. Pay attention to the SAML attributes and claims mapping to ensure that user attributes like usernames, email addresses, and roles are correctly mapped and passed between the services.
    3. Security and Access Control:
      • a. Define appropriate access controls and permissions for users and groups in both AWS SSO and AWS Control Tower. This includes assigning roles and policies within AWS Control Tower to ensure proper access to resources and guardrails.
      • b. Implement multi-factor authentication (MFA) to enhance security for user access to AWS Control Tower and associated AWS accounts.
      • c. Regularly review and update user access permissions as needed, especially when user roles or responsibilities change.
    4. Regional Considerations:
      • a. Keep in mind that AWS SSO and AWS Control Tower need to be set up in the same AWS Region. Consider the availability and performance requirements of your AWS resources and choose the appropriate AWS Region for deployment.
      • b. Consider any data residency or compliance requirements when selecting the AWS Region and configuring AWS Control Tower and associated services.
    5. Monitoring and Auditing:
      • a. Implement logging and monitoring solutions to track user access, changes to permissions, and any suspicious activities within AWS Control Tower and associated AWS accounts.
      • b. Regularly review audit logs and generate reports to ensure compliance with security and regulatory requirements.
    6. Documentation and Training:
      • a. Document the configuration steps, settings, and any customizations made during the integration process for future reference.
      • b. Provide training and guidance to users, administrators, and support teams on using and managing AWS Control Tower, AWS SSO, and Azure AD integration.

    Configuration:

    To configure AWS Control Tower with AWS Single Sign-On (SSO) and Azure Active Directory (AAD), you need to follow these steps:

    1. Set up AWS Control Tower:
      • a. Log in to your AWS Management Console.
      • b. Navigate to the AWS Control Tower service.
      • c. Follow the provided documentation or wizard to set up AWS Control Tower in your AWS account. This includes setting up the Control Tower lifecycle, organizational units (OUs), and guardrails.
    2. Set up AWS SSO:
      • a. Navigate to the AWS SSO service in the AWS Management Console.
      • b. Follow the documentation or wizard to set up AWS SSO in your AWS account.
      • c. Configure user attributes and identity sources as required.
    3. Set up Azure Active Directory (AAD):
      • a. Log in to the Azure portal.
      • b. Navigate to Azure Active Directory.
      • c. Follow the documentation or wizard to set up Azure AD in your Azure subscription.
      • d. Configure user attributes and identity sources as required.
    4. Set up federation between AWS SSO and AAD:
      • a. In the AWS SSO console, go to Settings.
      • b. Under Identity Source, choose “Add new identity source.”
      • c. Select “SAML” as the type and provide a name for the identity source.
      • d. Download the AWS SSO metadata file.
      • e. In the Azure portal, go to Azure Active Directory.
      • f. Navigate to the Enterprise applications section and select “New application.”
      • g. Choose “Non-gallery application” and provide a name for the application.
      • h. Under Single sign-on, select SAML.
      • i. Upload the AWS SSO metadata file.
      • j. Configure the SAML settings according to the AWS SSO documentation.
      • k. Save the SAML configuration.
    5. Assign users and groups to AWS Control Tower:
      • a. In the AWS SSO console, go to the AWS accounts tab.
      • b. Select the AWS Control Tower account and click on “Assign users/groups.”
      • c. Choose the appropriate users and groups from the AWS SSO directory.
      • d. Grant the necessary permissions for Control Tower access.
    6. Test the configuration:
      • a. Log in to the Azure portal using an account from AAD.
      • b. Navigate to the AWS Management Console using the AWS SSO link.
      • c. You should be able to access Control Tower resources based on the assigned permissions.
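
    For step 5 above, the same assignment can also be scripted. Below is a hedged boto3 sketch; the account ID, permission set ARN, and group ID are placeholders, and the group is assumed to have been synced from Azure AD into the AWS SSO identity store.

    import boto3

    sso_admin = boto3.client("sso-admin")

    # Look up the SSO instance, then assign a group to an account with a
    # permission set.
    instance_arn = sso_admin.list_instances()["Instances"][0]["InstanceArn"]

    sso_admin.create_account_assignment(
        InstanceArn=instance_arn,
        TargetId="111122223333",  # AWS account enrolled in Control Tower
        TargetType="AWS_ACCOUNT",
        PermissionSetArn="arn:aws:sso:::permissionSet/ssoins-EXAMPLE/ps-EXAMPLE",
        PrincipalType="GROUP",
        PrincipalId="group-id-from-identity-store",
    )
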
  • This is how we hit the limit in Amazon EFS

    This is how we hit the limit in Amazon EFS

    When we architect an application, it is essential to consider the current metrics and monitoring logs to ensure its design is future-proof. But sometimes, we do not have the necessary logs to make the right decisions. In that case, we let the application run in an architecture that seems optimized, based on the metrics we had access to, and let it run for a while to capture the logs required to apply the necessary changes. In our case, the application grew to a point we simply could not have anticipated!

    The COVID-19 pandemic has increased the consumption of all online applications and systems in the organization, whether modern or legacy software.

    We have an application that spans three availability zones, with EFS mount points in each AZ, retrieving data from an Amazon Aurora Serverless database. In the past two weeks, we realized the application was getting slower and slower. Checking the EC2 and database activity logs did not help, and we suspected something could be wrong with the storage. Unexpectedly, the issue was, in fact, caused by the EFS limit on I/O in General Purpose performance mode.

    In General Purpose performance mode, read and write operations consume a different number of file operations. Read data or metadata consumes one file operation. Write data or update metadata consumes five file operations. A file system can support up to 35,000 file operations per second. This might be 35,000 read operations, 7,000 write operations, or a combination of the two.

    see Amazon EFS quotas and limits – Quotas for Amazon EFS file systems.
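
    In hindsight, this limit can be spotted before it hurts. Here is a hedged boto3 sketch, with a placeholder file system ID, that pulls the PercentIOLimit metric Amazon EFS publishes for General Purpose file systems; sustained values close to 100 are the signal that Max I/O, or a redesign, is due.

    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Hourly maximum of PercentIOLimit over the past week.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EFS",
        MetricName="PercentIOLimit",
        Dimensions=[{"Name": "FileSystemId", "Value": "fs-12345678"}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=7),
        EndTime=datetime.now(timezone.utc),
        Period=3600,
        Statistics=["Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Maximum"])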

    After creating an EFS file system, you cannot change the performance mode, and with almost 2 TB of data in the file system, we were concerned about the downtime window. AWS suggests using AWS DataSync to migrate the data from either on-premises or any of the AWS storage offerings. Although DataSync could have helped migrate the data, we already had AWS Backup configured. So, we used AWS Backup to take a complete snapshot of the EFS file system and restored it as a Max I/O file system.

    Note that Max I/O performance mode offers a higher number of file system operations per second but has a slightly higher latency per each file system operation.

    Moodle Application Architecture

  • Communication and failure in information flow

    Communication and failure in information flow

    Failures of Information Flow

    According to the Oxford dictionary, communication is “the activity or process of expressing ideas and feelings or of giving people information”; Peters describes it as “an apparent answer to the painful divisions between self and other, private and public, and inner thought and outer word” (Peters, 1999).

    However, in reality, communication is not just about the transmission of information amongst people. It starts with the willingness to initiate this transmission of info.

    if the transmitter of information does not believe that the information will be acted upon or that it is not wanted, it will not be passed on – motivation to communicate will not exist.

    (Knowles, 2011)

    In an organization, if all levels of the hierarchy believe that top management must make the decisions, due to the concentration of power right at the top of the ranking, communication at the lower levels will eventually stop, which results in maladjusted decision making at all levels. The transmission of information should not be treated as an escalation of decision making. Instead, the information must be taken seriously, and superiors must not respond to passed-on information by dictating a decision.

    the organizational structure alone will not ensure a free flow of information – the will to communicate must be there too. This is, therefore, a cultural issue. Everyone in the organization has the responsibility to act on, pass, seek and receive information and there is no possibility to pass the responsibility to anyone else.

    (Knowles, 2011)

    Educating the organization about a free and speedy flow of information is the “heart of empowerment.”

    the organization functioning as a holistic process rather than a hierarchy based on functional power bases.

    (Knowles, 2011)

    The information flow and decision-making process need not follow the chain of command at all times. One way to improve the transmission of knowledge and the flow of information is to avoid 1-on-1 email communications and use tools like organization-wide wikis to share meeting notes, decisions, goals, plans, policies, etc.

    References

    Peters, J. D. (1999). Speaking into the air: A history of the idea of communication. Chicago: University of Chicago Press.

    Knowles, G. (2011). Quality Management. Graeme Knowles & Ventus Publishing APS, USA.

  • How to use EFS to store cx_Oracle, Pandas, and other python packages?

    How to use EFS to store cx_Oracle, Pandas, and other python packages?

    This post focuses on how to use EFS storage to store large packages and libraries like cx_Oracle, pandas, and pymssql, and import those packages in AWS Lambda. Considering the Lambda package size limit, which is inclusive of layers, larger function packages and libraries must be stored outside the Lambda package.

Some of these steps have already been done on our side, so you do not need to repeat them; you can simply mount the EFS to your Lambda and import the packages. However, I will log all the steps so we can reference them in the future and avoid technical debt.

    In short:

1. Launched an EC2 instance
2. Created an EFS file system
3. SSHed into the EC2 instance
4. Mounted the EFS on the EC2 instance
5. Created a sample Python venv project
6. Installed all the package requirements I had in the virtual environment
7. Moved the site-packages contents to the EFS/packages directory
8. Created a directory in EFS called sl (shared libraries)
9. Moved the libraries, including cx_Oracle, to EFS/sl/oracle
10. Created a test function in AWS Lambda using the code below
11. Added an environment variable entry in the AWS Lambda configuration
12. Tested

    In Long:

    I will start the details from step 4 onwards:

mkdir efs
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-6212b722.efs.ap-southeast-1.amazonaws.com:/ efs

Then I downloaded the latest version of the Instant Client from the Oracle website (Basic Package (ZIP)):

    wget https://download.oracle.com/otn_software/linux/instantclient/211000/instantclient-basic-linux.x64-21.1.0.0.0.zip
    unzip instantclient-basic-linux.x64-21.1.0.0.0.zip

Renaming the directory to oracle before copying it to the EFS:

mv instantclient_21_1 oracle

Creating the necessary directories in EFS:

    mkdir -p efs/sl/

Then moving the Oracle Instant Client to the EFS:

    mv oracle efs/sl/

Now I will create a Python virtual environment on the EC2 instance to download the necessary packages and copy them to the EFS:

    python3.8 -m venv venv 
    source venv/bin/activate 
pip install pandas cx_Oracle pymysql pymssql

Let’s check the packages in the venv site-packages directory and then move them to the EFS:

    ls venv/lib64/python3.8/site-packages/ 
    mkdir -p efs/packages 
    mv venv/lib64/python3.8/site-packages/* efs/packages/

At this point, we have the Python packages and shared objects/libraries copied to the EFS. Let’s mount the EFS in Lambda and try using the libraries and objects.


To mount the EFS in AWS Lambda, go to Configuration > File systems and click Add file system.

Once you select the EFS file system and access point, you will need to enter the Local mount path for the Lambda function, which must be an absolute path under /mnt. Save the file system configuration and go to the next step.
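If you prefer to script this step, the EFS access point can also be attached to the function with boto3. This is a minimal sketch under a few assumptions: the function name efs-packages-test and the access point ARN are placeholders for your own values.

import boto3

lambda_client = boto3.client("lambda", region_name="ap-southeast-1")

# Attach the EFS access point to the function and mount it under /mnt/lib
lambda_client.update_function_configuration(
    FunctionName="efs-packages-test",  # placeholder function name
    FileSystemConfigs=[
        {
            "Arn": "arn:aws:elasticfilesystem:ap-southeast-1:111111111111:access-point/fsap-EXAMPLE",
            "LocalMountPath": "/mnt/lib",
        }
    ],
)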

    You must add the environment variable before moving to the function test.

To add an environment variable, go to Configuration > Environment variables.

Click Edit, then Add environment variable, and enter the key and value as below:

    LD_LIBRARY_PATH=/mnt/lib/packages:/mnt/lib/sl/oracle/lib:$LD_LIBRARY_PATH

^^^^^^^^^^ Pay attention to the paths and how they are joined with colons
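The same variable can also be set with boto3. Keep in mind that update_function_configuration replaces the whole set of variables, so the sketch below first reads the existing variables and merges the new entry in; the function name is the same placeholder as above, and the value simply mirrors the console entry shown earlier.

import boto3

lambda_client = boto3.client("lambda", region_name="ap-southeast-1")

# Read the current variables so the update does not wipe out existing entries
config = lambda_client.get_function_configuration(FunctionName="efs-packages-test")
variables = config.get("Environment", {}).get("Variables", {})

# Mirror the console value from above
variables["LD_LIBRARY_PATH"] = "/mnt/lib/packages:/mnt/lib/sl/oracle/lib:$LD_LIBRARY_PATH"

lambda_client.update_function_configuration(
    FunctionName="efs-packages-test",
    Environment={"Variables": variables},
)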

    Sample Python code to test the libraries:

import json
import sys

# Make the packages on the EFS mount importable before importing them
sys.path.append("/mnt/lib/packages")

import cx_Oracle


def lambda_handler(event, context):
    # Connection string kept from the original example; replace with real credentials
    dsn = 'username/password@123.123.123.123/ORCL'
    conn = cx_Oracle.connect(dsn)
    curs = conn.cursor()
    curs.execute("SELECT 1 FROM dual")
    return json.dumps({"result": curs.fetchone()[0]})

You must append the system path with the packages directory from the mount point: sys.path.append("/mnt/lib/packages").

    Cheers  

  • Microsoft Teams is NOT your next LMS

    Microsoft Teams is NOT your next LMS

In the past eight months, there has been an astonishing number of changes in how learners receive content and learning materials, as well as in how they communicate with instructors. The changes in this period were far more than what we had seen in the past four to five years combined. Considering how fast institutions had to apply the changes to survive the pandemic, the usual change processes were either not considered or partially ignored.

Like any IT organization, my team and I had to ensure our clients could keep up with the changes, including the necessary training, infrastructure demands, new systems, and application installation and configuration.

In our institution, we have been using Office 365 for many years, and Microsoft Teams in particular was adopted by my team in mid-2018. Initially, it was buggy and limited to a few connectors and APIs, but it grew considerably over time. However, the pandemic made many companies and educators panic and adopt this collaboration tool for many other purposes without much consideration. I had discussions with educators who believed Microsoft Teams could be our institution’s next LMS and who started ignoring policy and using a collaboration tool for all the learners’ needs.

Some of the educators I talked to about using a collaboration tool instead of a full-fledged LMS tend to believe that it is the outcome that matters, not the tool used. There are many reasons why Microsoft Teams cannot replace a tool like Moodle as an LMS, and I will point out a few:

    Standardization

    “Standardization is the process of implementing and developing technical standards based on the consensus of different parties that include firms, users, interest groups, standards organizations, and governments. Standardization can help maximize compatibility, interoperability, safety, repeatability, or quality. It can also facilitate commoditization of formerly custom processes” – this is the definition of Standardization from Wikipedia.

    A full-fledged LMS like Moodle provides a smooth experience to all user types for different needs. That includes – but is not limited to – course and content management, support of content standards, reporting & analysis, user control, access management, etc.

For higher education institutions with complex business processes, standardizing the technology ecosystem is a must. It usually requires a long thought process, with stakeholders from different teams involved in meeting the requirements, before a process is finalized and implemented in the selected software.

Using a collaboration tool like Microsoft Teams instead of an LMS makes standardization of the technology stack more complicated, although not impossible. A sudden shift from Moodle to Microsoft Teams, without an impact and risk assessment across the technology stack, can go very wrong in the unseen structures of business application silos that have been in use for years.

Comprehensive vs. Fragmented

A comprehensive system operates at a large scale and can be complicated, while fragmenting it into small chunks makes it scalable but unpredictable. When you have access to an LMS that supports many kinds of assessments, processes, management forms, and quizzes, it can be part of the learning flow with no limitations placed on the instructor. However, in an environment like Microsoft Teams, an examination or quiz is delivered through a connector or a Microsoft Form. That is where fragmentation comes into the picture: the journey cannot be consolidated into a single platform that collects, grades, and assesses how to help a student succeed along the way.

    Using an LMS would allow you to embed and blend more than just a form into the learning journey.

    Embed more than just forms and quizzes

    Decentralized

Reporting and analytics are unavoidable necessities in the learning journey for identifying retention risk. Moodle collects the necessary logs and, after processing the activities, analyzes them and generates reports through a built-in module to identify students at risk. This analysis is done through an in-depth cross-section matrix of social breadth and cognitive depth.

    Intellectual Property and Course Repository

    Reusability, Content Packaging, and Interactivity

One of the most valuable assets of a university is the content developed by the faculty for instruction. The course materials and content must continuously evolve to respond to rapidly changing technology. Instructors who stay away from the LMS tend to share the original content through a third-party collaboration tool like Microsoft Teams. This helps delivery speed but delays improvements from being applied across the organization. The content remains in the instructor’s cloud storage, and changes are not reflected in the university’s content repository in the LMS for reuse. Corrections and improvements will not reach learners taking the same course with another instructor.

Another issue that I have found in Microsoft Teams is content packaging and delivery. Instructors can share content in multiple formats, and it will be stored in the Files tab of the chat, but it does not provide a structure and a path for the learner. Moreover, the learning environment’s interactivity and the content supplied to the student are inadequate and limited to images, text, and video, whereas content development tools like H5P can be used anywhere in the learner’s journey. Additionally, class recordings will not be available in the LMS for the user to access after completing the class; the class/meeting host has to either download and upload the recording file from the Teams session or provide the streaming link to Microsoft Stream. This eventually discourages users from interacting with the LMS and makes it irrelevant, and students get more confused about which platform to use for what purpose.

    Tool expert

Another issue that instructors have faced during the pandemic is the variety of newly introduced tools. Educators need to learn how to use many tools that are new in Microsoft 365, for example, using Microsoft Forms for tests and quizzes, using Planner to set deadlines, or using Microsoft Teams assignments to grade exams and then transferring the results to the campus information system. They have to spend time learning the tools to be able to teach! The point here is that the instructor should not be focusing on the tools but instead spending time providing quality materials and guidance.

    Privacy and Security

Microsoft Teams’ built-in features are not sufficient for teaching purposes, and instructors like to use third-party tools and services through connectors. These connectors can be used for polling, marketing tools, chatbots, or even Moodle. When using connectors in Microsoft Teams, the connectors pull the necessary data and access the directory, files, etc., based on what a user subscribes to. But how do all these third-party tools and applications handle your campus users’ personal data?

A few months back, a request was made to connect Microsoft Teams to Moodle through a Microsoft partner that provides the integration via an API. I checked the integration instructions and the amount of data passing through the partner to be sent to Microsoft Teams. Shockingly, almost every Moodle activity was accessible to the plugin, and the partner was receiving a copy of the action log. How is that data being maintained? Will Microsoft take responsibility for a data breach by the partners? Obviously not!

It is rather concerning that users connecting to third-party tools may have their data siphoned out of the organization, with little or no knowledge on the account admin’s part, in what could potentially be a massive data breach.

    Student Information System Integration

Connecting the institution’s systems to Moodle, or any other LMS, and building a synchronization is tedious work. But once it is done, you can send data both ways, from Moodle to the SIS (student information system) and vice versa. Microsoft helps schools with the SDS (School Data Sync) tool to create group chats in Microsoft Teams. Users can also connect their Moodle account to Microsoft Teams by authenticating through AAD (Azure Active Directory).

Any software or web application requires base data to function properly. Still, the scenario mentioned above should be considered, because too much data leaves the information system without any return or contribution back to it.

    Configurability and Customizability

I have heard from some educators who believe Microsoft Teams should be the student hub because it is “comprehensive” and meets classroom needs. Let’s not forget that Microsoft Teams is a closed-source, as-is solution, and no customization is available; any customization can only be delivered to users through connectors, third-party apps, and API integrations. Universities and colleges have complex processes and requirements that software like Moodle can meet by allowing users to customize the platform. The limitations of Microsoft Teams force institutions to abandon some of the business processes that were put in place after many years of analyzing what is best for students. The university should not change its strategies to fit the software’s limitations; the software should be customized to meet the business requirements.

    Conclusion

Microsoft Teams is a proper and suitable collaboration tool, not just for education but for other industries too. But it is important to remember that the nature of the software is not learning management. Educators and institutions can use Microsoft Teams to deliver online classrooms through its conferencing features, without expecting anything more than streaming of video content.

No matter which delivery platform is in use, the quality of the content is what matters most to the learner. Imposing a new system for purposes beyond a chat and conferencing tool will waste time and energy that could have been spent on improving content quality. Institutions can inspire students by focusing on the content instead of the tool. Moodle is a simple, integrated platform for both learner and trainer.

Microsoft Teams should be used as a gateway for communication, while learning should be managed through a full-fledged LMS.

    My opinions are mine and do not reflect my employer’s.

  • Malaysian Health Data Warehouse at the Minister’s Fingertips!

    Malaysian Health Data Warehouse at the Minister’s Fingertips!

The Minister of Health accessing the Malaysian Health Data Warehouse using Amazon Alexa to query hospital KPIs at SNOMED International, Kuala Lumpur.

This was a PoC project done in collaboration between APU and MoH to bring the health data warehouse to the Alexa assistant. The Alexa back end was built by my good friend and colleague Mustafa (Bofa).

#awsps #awspublicsector #apu #awscloud #malaysia #medical #healthcare #research #digitalhealth


    Amazon Web Services (AWS)
    Amazon Alexa Developers
Asia Pacific University of Technology and Innovation (APU / APIIT)