Saturday, June 8, 2024

IBM Bluemix

IBM Bluemix overview: Bluemix is IBM's open cloud platform that offers mobile and web developers access to IBM software for integration, security, transactions, and other key functions, as well as software from business partners.

Built on Cloud Foundry open source technology, Bluemix makes application development easier with Platform as a Service (PaaS). Bluemix also provides prebuilt Mobile Backend as a Service (MBaaS) capabilities. The goal is to simplify the delivery of an application by providing services that are ready for immediate use and hosting capabilities to enable internal scale development. 

Bluemix also offers cloud deployments that fit your needs. Whether you are a small business that plans to scale or a large enterprise that requires additional isolation, you can develop in a cloud without borders, where you can connect your dedicated services to the public Bluemix services available from IBM and third-party providers. All service instances are managed by IBM. 

You will get one bill for only what you choose to use. With the broad set of services and runtimes in Bluemix, the developer gains control and flexibility, and has access to various data options, from predictive analytics to big data. Bluemix provides the following features: 

● A range of services that enable you to build and extend web and mobile apps fast

● Processing power for you to deliver app changes continuously

● Fit-for-purpose programming models and services

● Manageability of services and applications

● Optimized and elastic workloads

● Continuous availability

Bluemix abstracts and hides most of the complexities that are associated with hosting and managing cloud-based applications.

As an application developer, you can focus on developing your application without having to manage the infrastructure that is required to host it. For both mobile and web apps, you can use the prebuilt services that are provided by Bluemix. You can upload your web app to Bluemix and indicate how many instances you want running. After your apps are deployed, you can easily scale them up or down as their use or load changes.
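Because Bluemix is built on Cloud Foundry, deploying and scaling an app is typically done with the Cloud Foundry cf CLI. The following is a minimal sketch, assuming you are already logged in to your org and space; the app name my-web-app and the instance counts are illustrative placeholders.

$ cf push my-web-app -i 2        # upload the app and request two running instances
$ cf scale my-web-app -i 5       # later, scale out to five instances as load grows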

You can go through IBM Bluemix in more depth here.

Google Cloud Platform (GCP)

 Google’s network is the largest network of its kind, and Google has invested billions of dollars over the years to build it.

Google’s global network is designed to give customers the highest possible throughput and the lowest possible latency for their applications. It leverages more than 100 content caching nodes worldwide, locations where high-demand content is cached for quicker access, so that user requests are served from the location that provides the fastest response time.

Google Cloud’s locations underpin all of the important work we do for our customers. From redundant cloud regions to high-bandwidth connectivity via subsea cables, every aspect of our infrastructure is designed to deliver your services to your users, no matter where they are around the world.


Each of these locations is divided into a number of different regions and zones.

Regions represent independent geographic areas, and are composed of zones. For example, London, or europe-west2, is a region that currently comprises three different zones.

A zone is an area where Google Cloud resources are deployed. For example, if you launch a virtual machine using Compute Engine (more about Compute Engine in a bit), it will run in the zone that you specify, which helps ensure resource redundancy.

You can run resources in different regions. This is useful for bringing applications closer to users around the world, and also for protection in case there are issues with an entire region, say, due to a natural disaster.

Some of Google Cloud’s services support placing resources in what we call a multi-region. For example, Spanner multi-region configurations allow you to replicate the database's data not just in multiple zones, but in multiple zones across multiple regions, as defined by the instance configuration. These additional replicas enable you to read data with low latency from multiple locations close to or within the regions in the configuration, like The Netherlands and Belgium.
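As a hedged illustration, a multi-region Spanner instance is created by naming one of the predefined multi-region configurations; eur3, for example, is a European configuration whose read-write replicas are, at the time of writing, located in Belgium and the Netherlands. The instance name, description, and node count below are placeholders.

$ gcloud spanner instances create demo-instance --config=eur3 --description="Multi-region demo" --nodes=1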

Google Cloud currently supports 121 zones in 40 regions, although this number is increasing all the time. The most up-to-date information can be found at cloud.google.com/about/locations.
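You can also query the current list of regions and zones directly from the gcloud CLI; the region filter below is just an illustration.

$ gcloud compute regions list
$ gcloud compute zones list --filter="region:europe-west2"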

At the Hardware infrastructure level:

Hardware design and provenance: Both the server boards and the networking equipment in Google data centers are custom designed by Google. Google also designs custom chips, including a hardware security chip that's currently being deployed on both servers and peripherals.

Secure boot stack: Google server machines use a variety of technologies to ensure that they are booting the correct software stack, such as cryptographic signatures over the BIOS, bootloader, kernel, and base operating system image.

Premises security: Google designs and builds its own data centers, which incorporate multiple layers of physical security protections. Access to these data centers is limited to only a very small fraction of Google employees. Google additionally hosts some servers in third-party data centers, where we ensure that there are Google-controlled physical security measures on top of the security layers provided by the data center operator.

At the Service deployment level:

Encryption of inter-service communication: Google’s infrastructure provides cryptographic privacy and integrity for remote procedure call (“RPC”) data on the network. Google’s services communicate with each other using RPC calls. The infrastructure automatically encrypts all infrastructure RPC traffic which goes between data centers. Google has started to deploy hardware cryptographic accelerators that will allow it to extend this default encryption to all infrastructure RPC traffic inside Google data centers.

At the User identity level:

User identity: Google’s central identity service, which usually manifests to end users as the Google login page, goes beyond asking for a simple username and password. The service also intelligently challenges users for additional information based on risk factors such as whether they have logged in from the same device or a similar location in the past. Users also have the option of employing secondary factors when signing in, including devices based on the Universal 2nd Factor (U2F) open standard.

At the Storage services level:

Encryption at rest: Most applications at Google access physical storage (in other words, “file storage”) indirectly via storage services, and encryption (using centrally managed keys) is applied at the layer of these storage services. Google also enables hardware encryption support in hard drives and SSDs.

At the Internet communication level:

Google Front End (“GFE”): Google services that want to make themselves available on the Internet register themselves with an infrastructure service called the Google Front End, which ensures that all TLS connections are terminated using a public-private key pair and an X.509 certificate from a Certificate Authority (CA), and that best practices such as supporting perfect forward secrecy are followed. The GFE additionally applies protections against Denial of Service attacks.

Denial of Service (“DoS”) protection: The sheer scale of its infrastructure enables Google to simply absorb many DoS attacks. Google also has multi-tier, multi-layer DoS protections that further reduce the risk of any DoS impact on a service running behind a GFE.

At Google’s Operational security level:

Intrusion detection: Rules and machine intelligence give Google’s operational security teams warnings of possible incidents. Google conducts Red Team exercises to measure and improve the effectiveness of its detection and response mechanisms.

Reducing insider risk: Google aggressively limits and actively monitors the activities of employees who have been granted administrative access to the infrastructure.

Employee U2F use: To guard against phishing attacks against Google employees, employee accounts require use of U2F-compatible Security Keys.

Software development practices: Google employs central source control and requires two-party review of new code. Google also provides its developers libraries that prevent them from introducing certain classes of security bugs. Google also runs a Vulnerability Rewards Program where we pay anyone who is able to discover and inform us of bugs in our infrastructure or applications.

Google provides interoperability at multiple layers of the stack. Kubernetes and Google Kubernetes Engine give customers the ability to mix and match microservices running across different clouds. Google Cloud Observability lets customers monitor workloads across multiple cloud providers.

An online pricing calculator can help estimate your costs. Visit cloud.google.com/products/calculator to try it out.

We can now take a deeper dive into several of these areas:









GCP - Applications in the cloud

 




The Cloud Run developer workflow is a straightforward three-step process. 
 ● First, you write your application using your favorite programming language. This application should start a server that listens for web requests. 
 ● Second, you build and package your application into a container image. 
 ● Third, the container image is pushed to Artifact Registry, where Cloud Run will deploy it. 

Once you’ve deployed your container image, you’ll get a unique HTTPS URL back. Cloud Run then starts your container on demand to handle requests, and ensures that all incoming requests are handled by dynamically adding and removing containers. Cloud Run is serverless, which means that you, as a developer, can focus on building your application and not on building and maintaining the infrastructure that powers it.
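A minimal sketch of that three-step workflow with the gcloud CLI might look like the following; the project ID, Artifact Registry repository, service name, and region are placeholders, and the repository is assumed to already exist.

$ gcloud builds submit --tag europe-west1-docker.pkg.dev/PROJECT_ID/my-repo/my-app   # build and push the container image
$ gcloud run deploy my-app --image=europe-west1-docker.pkg.dev/PROJECT_ID/my-repo/my-app --region=europe-west1 --allow-unauthenticated   # deploy; the command prints the service's HTTPS URL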






GCP - Containers

We've already explored Compute Engine, which is Google Cloud's infrastructure as a service offering, with access to servers, file systems, and networking, and App Engine, which is Google Cloud's platform as a service offering.

What are containers?

A container is an invisible box around your code and its dependencies with limited access to its own partition of the file system and hardware. It only requires a few system calls to create and it starts as quickly as a process. All that’s needed on each host is an OS kernel that supports containers and a container runtime. 

In essence, the OS and dependencies are being virtualized. A container gives you the best of both worlds: it scales like Platform as a Service (PaaS) but gives you nearly the same flexibility as Infrastructure as a Service (IaaS). This makes your code ultra portable, and the OS and hardware can be treated as a black box. So you can go from development, to staging, to production, or from your laptop to the cloud, without changing or rebuilding anything.

In short, containers are portable, loosely coupled boxes of application code and dependencies that allow you to “code once, and run anywhere.” 
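For example, assuming Docker as the container runtime and a Dockerfile in the current directory (both assumptions for illustration, not requirements of Google Cloud), packaging and running that portable box looks like this; the image tag and port are placeholders.

$ docker build -t my-app:v1 .              # package code and dependencies into an image
$ docker run -d -p 8080:8080 my-app:v1     # start a container from that image in seconds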




You can scale by duplicating single containers.

As an example, let’s say you want to launch and then scale a web server. With a container, you can do this in seconds and deploy dozens or hundreds of them, depending on the size of your workload, on a single host. This is because containers can be easily “scaled” to meet demand.

This is just a simple example of scaling one container which is running your whole application on a single host. 






A product that helps manage and scale containerized applications is Kubernetes. So to save time and effort when scaling applications and workloads, Kubernetes can be bootstrapped using Google Kubernetes Engine or GKE.

So, what is Kubernetes? Kubernetes is an open-source platform for managing containerized workloads and services. It makes it easy to orchestrate many containers on many hosts, scale them as microservices, and easily deploy rollouts and rollbacks.

At the highest level, Kubernetes is a set of APIs that you can use to deploy containers on a set of nodes called a cluster.

The system is divided into a set of primary components that run as the control plane and a set of nodes that run containers. In Kubernetes, a node represents a computing instance, like a machine. Note that this is different from a node on Google Cloud, which is a virtual machine running in Compute Engine.

You can describe a set of applications and how they should interact with each other, and Kubernetes determines how to make that happen.


 
Deploying containers on nodes by using a wrapper around one or more containers is what defines a Pod. A Pod is the smallest unit in Kubernetes that you create or deploy. It represents a running process on your cluster as either a component of your application or an entire app.

Generally, you only have one container per pod, but if you have multiple containers with a hard dependency, you can package them into a single pod and share networking and storage resources between them. The Pod provides a unique network IP and set of ports for your containers and configurable options that govern how your containers should run.

One way to run a container in a Pod in Kubernetes is to use the kubectl run command, which starts a Deployment with a container running inside a Pod.

A Deployment represents a group of replicas of the same Pod and keeps your Pods running even when the nodes they run on fail. A Deployment could represent a component of an application or even an entire app.

To see a list of the running Pods in your project, run the command: $ kubectl get pods 
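As a small, hedged example using the public nginx image as a stand-in for your own container (on recent kubectl versions, kubectl create deployment is the usual way to start a Deployment), you can create a Deployment, scale it to several replicas, and then list the resulting Pods:

$ kubectl create deployment nginx --image=nginx
$ kubectl scale deployment nginx --replicas=3
$ kubectl get pods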



 
Kubernetes creates a Service with a fixed IP address for your Pods, and a controller says "I need to attach an external load balancer with a public IP address to that Service so others outside the cluster can access it".

In GKE, the load balancer is created as a network load balancer.




Any client that reaches that IP address will be routed to a Pod behind the Service. A Service is an abstraction which defines a logical set of Pods and a policy by which to access them. 

As Deployments create and destroy Pods, Pods will be assigned their own IP addresses, but those addresses don't remain stable over time. 
A Service groups a set of Pods and provides a stable endpoint (or fixed IP address) for them.

For example, if you create two sets of Pods called frontend and backend and put them behind their own Services, the backend Pods might change, but frontend Pods are not aware of this. They simply refer to the backend Service.




You can still reach your endpoint as before by using kubectl get services to get the external IP of the Service and reach the public IP address from a client.
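Continuing the hypothetical nginx Deployment from earlier, exposing it behind a Service and finding its external IP looks like this:

$ kubectl expose deployment nginx --type=LoadBalancer --port=80
$ kubectl get services    # the EXTERNAL-IP column shows the Service's public address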





Benefits of running GKE clusters:

 Running a GKE cluster comes with the benefit of advanced cluster management features that Google Cloud provides. These include: 
● Google Cloud's load-balancing for Compute Engine instances. 
● Node pools to designate subsets of nodes within a cluster for additional flexibility. 
● Automatic scaling of your cluster's node instance count. 
● Automatic upgrades for your cluster's node software. 
● Node auto-repair to maintain node health and availability. 
● Logging and monitoring with Google Cloud Observability for visibility into your cluster.


To start up Kubernetes on a cluster in GKE, all you do is run this command: 
$ gcloud container clusters create k1


GCP - Storage

 Every application needs to store data - for example, media to be streamed or perhaps even sensor data from devices - and different applications and workloads require different storage database solutions.

Google Cloud has storage options for structured, unstructured, transactional, and relational data.


Google Cloud’s five core storage products are: 

● Cloud Storage 
● Bigtable 
● Cloud SQL 
● Spanner 
● Firestore 

Depending on your application, you might use one or several of these services to do the job.


Cloud Storage’s primary use is whenever binary large-object storage (also known as a “BLOB”) is needed for online content such as videos and photos, for backup and archived data, and for storage of intermediate results in processing workflows.

Cloud Storage files are organized into buckets. A bucket needs a globally unique name and a specific geographic location where it should be stored, and an ideal location for a bucket is one where latency is minimized. For example, if most of your users are in Europe, you probably want to pick a European location: either a specific Google Cloud region in Europe, or else the EU multi-region.
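For instance, a bucket pinned to the EU multi-region can be created with the gsutil tool that ships with the Google Cloud CLI; the bucket name below is a placeholder and must be globally unique.

$ gsutil mb -l EU gs://my-unique-bucket-name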

The storage objects offered by Cloud Storage are “immutable,” which means that you do not edit them, but instead a new version is created with every change made.

Administrators have the option to either allow each new version to completely overwrite the older one, or to keep track of each change made to a particular object by enabling “versioning” within a bucket. If you choose to use versioning, Cloud Storage will keep a detailed history of modifications (that is, overwrites or deletes) of all objects contained in that bucket.
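Versioning is enabled per bucket; a minimal example, again with a placeholder bucket name:

$ gsutil versioning set on gs://my-unique-bucket-name
$ gsutil versioning get gs://my-unique-bucket-name   # confirms whether versioning is Enabled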


In many cases, personally identifiable information may be contained in data objects, so controlling access to stored data is essential to ensuring security and privacy are maintained. Using IAM roles and, where needed, access control lists (ACLs), companies can conform to security best practices, which require that each user has access and permissions to only what they need to do their job, and no more.

There are a couple of options to control user access to objects and buckets (an example follows the list): 
1. For most purposes, IAM is sufficient. Roles are inherited from project to bucket to object. 

2. If you need finer control, you can create access control lists. Each access control list consists of two pieces of information. The first is a scope, which defines who can access and perform an action. This can be a specific user or group of users. The second is a permission, which defines what actions can be performed, like read or write.
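A hedged sketch of both options with gsutil, using placeholder principal, bucket, and object names:

$ gsutil iam ch user:analyst@example.com:roles/storage.objectViewer gs://my-unique-bucket-name   # IAM: read-only access to the whole bucket
$ gsutil acl ch -u analyst@example.com:R gs://my-unique-bucket-name/report.csv                   # ACL: read access to a single object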







There are four primary storage classes in Cloud Storage, and stored data is managed and billed according to the “class” to which it belongs (a short example follows the list):

1. The first is Standard Storage. Standard Storage is considered best for frequently accessed, or “hot,” data. It’s also great for data that is stored for only brief periods of time. 

2. The second storage class is Nearline Storage. This is best for storing infrequently accessed data, like reading or modifying data once per month or less, on average. Examples might include data backups, long-tail multimedia content, or data archiving. 

3. The third storage class is Coldline Storage. This is also a low-cost option for storing infrequently accessed data. However, as compared to Nearline Storage, Coldline Storage is meant for reading or modifying data, at most, once every 90 days. 

4. The fourth storage class is Archive Storage. This is the lowest-cost option, used ideally for data archiving, online backup, and disaster recovery. It’s the best choice for data that you plan to access less than once a year, because it has higher costs for data access and operations and a 365-day minimum storage duration.
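As a brief, hedged example, the default storage class is chosen when a bucket is created, and an existing object can be moved to a colder class later; the bucket and object names are placeholders.

$ gsutil mb -c nearline -l EU gs://my-backup-bucket                   # bucket whose objects default to Nearline
$ gsutil rewrite -s coldline gs://my-backup-bucket/old-logs.tar.gz    # move an existing object to Coldline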




Cloud Storage also provides a feature called Autoclass, which automatically transitions objects to appropriate storage classes based on each object's access pattern.
The feature moves data that is not accessed to colder storage classes to reduce storage cost and moves data that is accessed to Standard storage to optimize future accesses.
Autoclass simplifies and automates cost saving for your Cloud Storage data. 







 








GCP - Virtual Machines and Networks

 A virtual private cloud, or VPC, is a secure, individual, private cloud-computing model hosted within a public cloud – like Google Cloud!

On a VPC, customers can run code, store data, host websites, and do anything else they could do in an ordinary private cloud, but this private cloud is hosted remotely by a public cloud provider. This means that VPCs combine the scalability and convenience of public cloud computing with the data isolation of private cloud computing.

VPC networks connect Google Cloud resources to each other and to the internet. This includes segmenting networks, using firewall rules to restrict access to instances, and creating static routes to forward traffic to specific destinations. 

Here's something that tends to surprise a lot of new Google Cloud users: Google VPC networks are global. They can also have subnets, which are segmented pieces of the larger network, in any Google Cloud region worldwide. Subnets can span the zones that make up a region. This architecture makes it easy to define network layouts with global scope. Resources can even be in different zones on the same subnet.

 


The size of a subnet can be increased by expanding the range of IP addresses allocated to it. And doing so won’t affect already configured virtual machines.

For example, let’s take a VPC network named vpc1 that has two subnets defined in the asia-east1 and us-east1 regions. If the VPC has three Compute Engine VMs attached to it, it means they’re neighbors on the same subnet even though they are in different zones! This capability can be used to build solutions that are resilient to disruptions, yet retain a simple network layout.
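A minimal sketch of that layout, and of expanding a subnet's range with the gcloud CLI, follows; the network name, subnet names, and CIDR ranges are illustrative.

$ gcloud compute networks create vpc1 --subnet-mode=custom
$ gcloud compute networks subnets create subnet-us-east1 --network=vpc1 --region=us-east1 --range=10.0.1.0/24
$ gcloud compute networks subnets create subnet-asia-east1 --network=vpc1 --region=asia-east1 --range=10.0.2.0/24
$ gcloud compute networks subnets expand-ip-range subnet-us-east1 --region=us-east1 --prefix-length=20   # grow the range without touching existing VMs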

With Compute Engine, users can create and run virtual machines on Google infrastructure. There are no upfront investments, and thousands of virtual CPUs can run on a system that is designed to be fast and offer consistent performance.

Each virtual machine contains the power and functionality of a full-fledged operating system. This means a virtual machine can be configured much like a physical server: by specifying the amount of CPU power and memory needed, the amount and type of storage needed, and the operating system.

You can create a virtual machine instance or create a group of managed instances by using the Google Cloud console, which is a web-based tool to manage Google Cloud projects and resources, the Google Cloud CLI, or the Compute Engine API.

The instance can run Linux and Windows Server images provided by Google, or any customized versions of these images. You can also build and run images of other operating systems and flexibly reconfigure virtual machines.
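Creating a VM from the command line is a one-liner; the instance name, zone, machine type, and image family below are placeholders you would adjust to your needs.

$ gcloud compute instances create my-vm --zone=us-central1-a --machine-type=e2-standard-2 --image-family=debian-12 --image-project=debian-cloud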

A quick way to get started with Google Cloud is through the Cloud Marketplace, which offers solutions from both Google and third-party vendors. With these solutions, there’s no need to manually configure the software, virtual machine instances, storage, or network settings, although many of them can be modified before launch if that’s required.

Most software packages in Cloud Marketplace are available at no additional charge beyond the normal usage fees for Google Cloud resources. Some Cloud Marketplace images charge usage fees, particularly those published by third parties, with commercially licensed software, but they all show estimates of their monthly charges before they’re launched.


 At this point, you might be wondering about pricing and billing related to Compute Engine.

For the use of virtual machines, Compute Engine bills by the second with a one-minute minimum, and sustained-use discounts start to apply automatically to virtual machines the longer they run. So, for each VM that runs for more than 25% of a month, Compute Engine automatically applies a discount for every additional minute.

Compute Engine also offers committed-use discounts. This means that for stable and predictable workloads, a specific amount of vCPUs and memory can be purchased for up to a 57% discount off of normal prices in return for committing to a usage term of one year or three years. 

And then there are Preemptible and Spot VMs. Let’s say you have a workload that doesn’t require a human to sit and wait for it to finish–like a batch job analyzing a large dataset, for example. You can save money, in some cases up to 90%, by choosing Preemptible VMs to run the job.

A Preemptible or Spot VM is different from an ordinary Compute Engine VM in only one respect: Compute Engine has permission to terminate a job if its resources are needed elsewhere. While savings can be had with preemptible or spot VMs, you'll need to ensure your job can be stopped and restarted.

Spot VMs differ from Preemptible VMs by offering more features. For example, preemptible VMs can only run for up to 24 hours at a time, but Spot VMs do not have a maximum runtime. However, the pricing is currently the same for both.  
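A hedged sketch of both variants, with placeholder names and zone:

$ gcloud compute instances create batch-worker --zone=us-central1-a --preemptible                  # Preemptible VM, capped at 24 hours
$ gcloud compute instances create batch-worker2 --zone=us-central1-a --provisioning-model=SPOT     # Spot VM, no maximum runtime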

GCP - IAM roles

 When an organization node contains lots of folders, projects, and resources, it’s likely that a workforce might need to restrict who has access to what.

To help with this task, administrators can use Identity and Access Management, or IAM. With IAM, administrators can apply policies that define who can do what on which resources:

● The “who” part of an IAM policy can be a Google account, a Google group, a service account, or a Cloud Identity domain. A “who” is also called a “principal.” Each principal has its own identifier, usually an email address. 

● The “can do what” part of an IAM policy is defined by a role. An IAM role is a collection of permissions. When you grant a role to a principal, you grant all the permissions that the role contains. For example, to manage virtual machine instances in a project, you have to be able to create, delete, start, stop and change virtual machines. So these permissions are grouped together into a role to make them easier to understand and easier to manage. 

When a principal (user, group, or service account) is given a role on a specific element of the resource hierarchy, the resulting policy applies to the chosen element, as well as to all of the elements below it in the hierarchy.
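Granting a role at the project level can be done with a single gcloud command; the project ID, user, and role below are illustrative.

$ gcloud projects add-iam-policy-binding my-project --member="user:jane@example.com" --role="roles/compute.viewer"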

You can define deny rules that prevent certain principals from using certain permissions, regardless of the roles they're granted. This is because IAM always checks relevant deny policies before checking relevant allow policies. 

Deny policies, like allow policies, are inherited through the resource hierarchy.


The first role type is basic. Basic roles are quite broad in scope. When applied to a Google Cloud project, they affect all resources in that project. Basic roles include owner, editor, viewer, and billing administrator.

Let’s have a look at these basic roles: 

● Project viewers can examine resources, but can make no changes. 

● Project editors can examine and make changes to a resource. 

● And project owners can also examine and make changes to a resource. In addition, project owners can manage the associated roles and permissions, as well as set up billing. 

● Often companies want someone to be able to control the billing for a project, but not have permissions to change the resources in the project. This is possible through a billing administrator role. 

A word of caution: If several people are working together on a project that contains sensitive data, basic roles are probably too broad. Fortunately, IAM provides other ways to assign permissions that are more specifically tailored to meet the needs of typical job roles.




But what if you need to assign a role that has even more specific permissions? That’s when you’d use a custom role.

A lot of companies use a “least-privilege” model, in which each person in your organization is given the minimal amount of privilege needed to do their job. So, for example, maybe you want to define an “instanceOperator” role to allow some users to stop and start Compute Engine virtual machines, but not reconfigure them. Custom roles allow for that.
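A hedged sketch of such an "instanceOperator" custom role follows; the project ID is a placeholder, and the exact permission set you include would depend on what your operators actually need.

$ gcloud iam roles create instanceOperator --project=my-project --title="Instance Operator" --permissions=compute.instances.get,compute.instances.list,compute.instances.start,compute.instances.stop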



Service accounts let applications and virtual machines, rather than end users, authenticate and be granted roles. Service accounts are identified with email addresses and, like other principals, are managed by IAM.
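For example (with placeholder names), a service account is created and then granted a role just like any other principal:

$ gcloud iam service-accounts create app-backend --display-name="App backend"
$ gcloud projects add-iam-policy-binding my-project --member="serviceAccount:app-backend@my-project.iam.gserviceaccount.com" --role="roles/storage.objectViewer"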


GCP - Resources and Access in the Cloud



It’s important to understand this resource hierarchy, as it directly relates to how policies are managed and applied when using Google Cloud. Policies can be defined at the project, folder, and organization node levels. Some Google Cloud services allow policies to be applied to individual resources, too. Policies are also inherited downward. This means that if you apply a policy to a folder, it will also apply to all of the projects within that folder.




Each Google Cloud project has three identifying attributes: a project ID, a project name, and a project number (a command that displays all three appears after the list).

● The project ID is a globally unique identifier assigned by Google that cannot be changed after creation (it is immutable). Project IDs are used in different contexts to inform Google Cloud of the exact project to work with. 

 ● The project names, however, are user-created. They don’t have to be unique and they can be changed at any time, so they are not immutable. 

● Google Cloud also assigns each project a unique project number. It’s helpful to know that these Google-generated numbers exist, but we won’t explore them much in this course. They are mainly used internally, by Google Cloud, to keep track of resources.
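The hedged example below uses a placeholder project ID:

$ gcloud projects describe my-project-id    # output includes name, projectId, and projectNumber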



The third level of the Google Cloud resource hierarchy is folders. 

Folders let you assign policies to resources at a level of granularity you choose. The projects and subfolders in a folder contain resources that inherit policies and permissions assigned to that folder. 

 A folder can contain projects, other folders, or a combination of both.



