Compare commits

...

12 Commits

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| KostLinux | db4918ea42 | Merge fd2276232d into 2f7977a95a | 2024-09-12 21:53:46 +01:00 |
| Oumaima Fisaoui | 2f7977a95a | Chore(AI): Fix sp500 subject and audit | 2024-09-09 09:58:11 +01:00 |
| Oumaima Fisaoui | 1d34ea0a71 | Chore(DPxAI): Fix format | 2024-09-06 11:18:31 +01:00 |
| Oumaima Fisaoui | 75472c0ed6 | Chore(DPxAI): Fix format | 2024-09-06 11:18:31 +01:00 |
| Oumaima Fisaoui | cccab05477 | Chore(DPxAI): Fix format | 2024-09-06 11:18:31 +01:00 |
| Oumaima Fisaoui | aa54ab1e66 | Chore(DPxAI): Fix format | 2024-09-06 11:18:31 +01:00 |
| Oumaima Fisaoui | 62486ed720 | Chore(DPxAI): Fix the accuracy on test set | 2024-09-06 11:18:31 +01:00 |
| Oumaima Fisaoui | 659074232f | Chore(AI): Fix emotions detector | 2024-09-06 11:18:31 +01:00 |
| oumaimafisaoui | 9c9adb1c88 | Fix(Pipeline): Fix irradiat attribute values | 2024-09-05 14:49:00 +01:00 |
| oumaimafisaoui | 00813d29e9 | Fix(Pipeline): fix formatting | 2024-09-05 14:49:00 +01:00 |
| oumaimafisaoui | fe5f82edcf | Fix(Pipeline): fix datafile data info and example do not match | 2024-09-05 14:49:00 +01:00 |
| KostLinux | fd2276232d | feat: added test task and serverless task | 2024-08-31 13:25:06 +03:00 |
16 changed files with 369 additions and 54 deletions

View File

@ -0,0 +1,5 @@
# Test tasks from the real job interviews
This folder is used to store test tasks from real job interviews. The tasks live in the `test-tasks` folder, with each task in a separate folder named after the company that provided it. Inside each company folder there is a `README.md` file with the task description.
These tasks prepare students and junior specialists for real job interviews: they are designed to test a candidate's knowledge and skills, and are usually taken from actual interview assignments and the company's technology stack.

View File

@ -0,0 +1,29 @@
# Alpeso
## Alpeso Recruiting - Senior DevOps Engineer
This is a test project within Alpeso's technical recruiting process.
Look at the following tasks and estimate how much time you will spend on them.
If you are in doubt about technical issues, you can send an e-mail with your questions.
## Preconditions
### Technical & Knowledge
You need at least:
* Experience with AWS stack
* Experience with CI/CD
* Experience with Bash scripts
* Experience in at least one programming language (Java, Python, PHP, Perl, etc.)
* A text editor of your choice
## The tasks
1) We have a Terraform `securitygroups.tf` file. Every time Terraform runs, it says the security group in that file will be updated in place. Find a way to prevent this.
2) You have the `alpeso-test.tar.gz` archive. What can we improve?
3) Provide infrastructure and create CI/CD with a web app that listens on port 8089 and returns the string "ReallyNotBad" when a POST request contains the header "NotBad" with value "true", e.g. `curl -X POST -H "NotBad: true" https://someurl:8089/` should return "ReallyNotBad" (a minimal sketch of such an endpoint follows this list).
Use any technology you want to deploy the application to AWS. It can be Ansible, Terraform, etc. or a combination of some of them.
Hint: https://aws.amazon.com/free/
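For orientation, the behaviour asked for in task 3 boils down to something like the following Python sketch. The response for non-matching requests and the TLS termination are assumptions; the real work is the AWS infrastructure and CI/CD around it.

```python
# Minimal sketch of the task-3 endpoint using only the standard library.
# Assumption: requests without the "NotBad: true" header get a plain "Bad" reply;
# HTTPS termination is expected to happen at the load balancer / API gateway.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NotBadHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = b"ReallyNotBad" if self.headers.get("NotBad") == "true" else b"Bad"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # curl -X POST -H "NotBad: true" http://localhost:8089/  ->  ReallyNotBad
    HTTPServer(("0.0.0.0", 8089), NotBadHandler).serve_forever()
```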

Binary file not shown.

View File

@ -0,0 +1,39 @@
# Luminor Test Task
## Part 1 The Web Service
Write a web service in any language that takes in a JSON payload, does some basic validation against an expected message format and content, and then puts that payload into a queue of your choice or a file.
Example valid payload:
```json
{
  "ts": "1530228282",
  "sender": "curler-user",
  "message": {
    "foo": "bar",
    "hash": "bash"
  },
  "sent-from-ip": "1.2.3.4"
}
```
Validation rules (a minimal sketch of these checks follows the list):
- `ts` must be present and a valid Unix timestamp
- `sender` must be present and a string
- `message` must be present, a JSON object, and have at least one field set
- If present, `sent-from-ip` must be a valid IPv4 address
- All fields not listed in the example above are invalid, and should result in the message being rejected.
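One way to read these rules, as a minimal Python sketch; the web framework and the queue/file step are left out, and treating `ts` as a digit string follows the example payload.

```python
# Minimal sketch of the validation rules above, applied to a dict parsed from the JSON body.
import ipaddress

ALLOWED_FIELDS = {"ts", "sender", "message", "sent-from-ip"}

def is_valid(payload: dict) -> bool:
    if not ALLOWED_FIELDS.issuperset(payload):            # unknown fields -> reject
        return False
    ts = payload.get("ts")
    if not isinstance(ts, str) or not ts.isdigit():       # "ts": present, valid Unix timestamp
        return False
    if not isinstance(payload.get("sender"), str):        # "sender": present, a string
        return False
    message = payload.get("message")
    if not isinstance(message, dict) or not message:      # "message": non-empty JSON object
        return False
    if "sent-from-ip" in payload:                         # optional, must be a valid IPv4 address
        try:
            ipaddress.IPv4Address(payload["sent-from-ip"])
        except ValueError:
            return False
    return True

if __name__ == "__main__":
    example = {
        "ts": "1530228282",
        "sender": "curler-user",
        "message": {"foo": "bar", "hash": "bash"},
        "sent-from-ip": "1.2.3.4",
    }
    print(is_valid(example))  # True
```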
## Part 2 Terraform
Deploy this application to your favourite cloud provider using Terraform.
## Part 3 NewRelic
Implement NewRelic monitoring for this application using Terraform.
Please send the finished code, tests, and results by e-mail.

View File

@ -0,0 +1,70 @@
# Entain
Entain is one of the biggest gaming companies; it hosts casinos such as Olybet and Optibet. They are looking for a software engineer to join their team.
## Task
### Tools and technologies used:
1. Go
2. PostgreSQL
3. Docker
4. Makefile
5. Postman
All database tables are defined in the `migrations` folder. To run the migrations, use the `make` commands:
1. `make migrate-up` - to run up migrations
2. `make migrate-down` - to run down migrations
3. `make migrate-force` - to force run migrations if you have some errors like `error: Dirty database version -1. Fix and force version.`
### Tables:
1. `users` - contains users data
2. `transaction` - contains transactions data
### Endpoints to test:
1. `GET /users` - to get all users
2. `GET /users/{user_id}` - to get user by id, check his balance
3. `GET /transactions/{user_id}` - to get all transactions by user id (check if user has any transactions)
4. `POST /process-record/{user_id}` - to process record by user id
Process record request body example:
```json
{
  "amount": 10,
  "transaction_id": "64338a05-81e5-426b-b01e-927e447c9e33",
  "state": "win"
}
```
The transaction id is unique, so you can't process the same transaction twice; provide it in UUID v4 format.
The state can be `win` or `lose`.
The amount is a number and should be positive; to decrease the balance, provide the `lose` state.
### Required header for all endpoints:
1. `Source-Type: game` - available values: `game`, `server`, `payment`
Postman collection is in `postman` folder to test endpoints.
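Outside Postman, the same call can be reproduced with a few lines of Python; a minimal sketch, where the base URL and port are assumptions for a local run:

```python
# Minimal sketch of calling the process-record endpoint for the test user.
import requests

BASE_URL = "http://localhost:8080"  # assumption: adjust to the port the app listens on locally

response = requests.post(
    f"{BASE_URL}/process-record/63e83104-b9a7-4fec-929e-9d08cae3f9b9",
    headers={"Source-Type": "game"},  # required header: game | server | payment
    json={
        "amount": 10,
        "transaction_id": "64338a05-81e5-426b-b01e-927e447c9e33",  # unique, UUID v4
        "state": "win",                                            # "win" or "lose"
    },
)
print(response.status_code, response.text)
```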
## To run the app locally:
1. Create `.env` file in root folder and add all required variables from `.env.example` file
2. To run migrations you should have the `migrate` tool installed. You can install it with `brew install golang-migrate` (https://github.com/golang-migrate/migrate/tree/master/cmd/migrate)
3. To run any `make` command you should have the `make` tool installed. You can install it with `sudo apt install make` (https://linuxhint.com/install-make-ubuntu/)
4. Run `make migrate-up` command to run migrations and create all tables with test user (user_id: `63e83104-b9a7-4fec-929e-9d08cae3f9b9`)
5. Run `make run` command to run application
6. Take a look at the `postman` folder for the collection to test all endpoints
The test user with id `63e83104-b9a7-4fec-929e-9d08cae3f9b9` will be created automatically when you run migrations.
This user has a balance of 50 for testing.
## To run application in docker container:
1. Create `.env` file in root folder and add all required variables from `.env.example` file
2. To run docker container you should have `docker` and `docker-compose` tools installed (Tested on `Docker version 26.1.3, build b72abbb` and `Docker Compose version v2.27.1`)
3. `docker-compose up` - to run application in docker container
4. `docker-compose down` - to stop application in docker container

View File

@ -1,18 +1,22 @@
## Emotions detection with Deep Learning
Cameras are everywhere. Videos and images have become one of the most interesting data sets for artificial intelligence.
Image processing is a quite broad research area, not just filtering, compression, and enhancement. Besides, we are even interested in the question, “what is in images?”, i.e., content analysis of visual inputs, which is part of the main task of computer vision.
The study of computer vision could make possible such tasks as 3D reconstruction of scenes, motion capturing, and object recognition, which are crucial for even higher-level intelligence such as image and video understanding, and motion understanding.
For this 2 months project we will
focus on two tasks:
Image processing is a quite broad research area, not just filtering, compression, and enhancement.
- emotion classification
- face tracking
Besides, we are even interested in the question, “what is in images?”, i.e., content analysis of visual inputs, which is part of the main task of computer vision.
The study of computer vision could make possible such tasks as 3D reconstruction of scenes, motion capturing, and object recognition, which are crucial for even higher-level intelligence such as image and video understanding, and motion understanding.
For this project we will focus on two tasks:
- Emotion classification
- Face tracking
With computing power increasing exponentially, the computer vision field has been developing rapidly. This is a key element, because this computing power makes it much easier to use a type of neural network that is very powerful on images:
CNN's (Convolutional Neural Networks). Before the CNNs were democratized, the algorithms used relied a lot on human analysis to extract features which obviously time-consuming and not reliable. If you're interested in the "old
school methodology" [this article](https://towardsdatascience.com/classifying-facial-emotions-via-machine-learning-5aac111932d3) explains it.
The history behind this field is fascinating! [Here](https://kapernikov.com/basic-introduction-to-computer-vision/) is a short summary of its history.
- CNNs (Convolutional Neural Networks). Before CNNs were democratized, the algorithms used relied heavily on human analysis to extract features, which was obviously time-consuming and not reliable. If you're interested in the "old school methodology", [this article](https://towardsdatascience.com/classifying-facial-emotions-via-machine-learning-5aac111932d3) explains it.
- The history behind this field is fascinating! [Here](https://kapernikov.com/basic-introduction-to-computer-vision/) is a short summary of its history.
### Project goal and suggested timeline
@ -31,15 +35,18 @@ The two steps are detailed below.
### Preliminary:
- Take [this course](https://www.coursera.org/learn/convolutional-neural-networks). This course is a reference for many reasons and one of them is the creator: **Andrew Ng**. He explains the basics of CNNs but also some more advanced topics such as transfer learning, siamese networks, etc.
I suggest to focus on Week 1 and 2 and to spend less time on Week 3 and 4. Don't worry the time scoping of such MOOCs are conservative. You can attend the lessons for free!
- I suggest focusing on Weeks 1 and 2 and spending less time on Weeks 3 and 4. Don't worry, the time scoping of such MOOCs is conservative. You can attend the lessons for free!
- Participate in [this challenge](https://www.kaggle.com/c/digit-recognizer/code). The MNIST dataset is a reference in computer vision. Researchers use it as a benchmark to compare their models.
Start first with a logistic regression to understand how to handle images in Python. And then train your first CNN on this data set.
- Start with a logistic regression to understand how to handle images in Python, and then train your first CNN on this dataset.
### Face emotions classification
Emotion detection is one of the most researched topics in the modern-day machine learning arena. The ability to accurately detect and identify an emotion opens up numerous doors for Advanced Human Computer Interaction.
The aim of this project is to detect up to seven distinct facial emotions in real time. This project runs on top of a Convolutional Neural Network (CNN) that is built with the help of Keras whose backend is TensorFlow in Python.
The aim of this project is to detect up to seven distinct facial emotions in real time.
This project runs on top of a Convolutional Neural Network (CNN) that is built with the help of Keras whose backend is TensorFlow in Python.
The facial emotions that can be detected and classified by this system are Happy, Sad, Angry, Surprise, Fear, Disgust and Neutral.
Your goal is to implement a program that takes as input a video stream that contains a person's face and that predicts the emotion of the person.
@ -49,10 +56,10 @@ Your goal is to implement a program that takes as input a video stream that cont
- Download and unzip the [data here](https://assets.01-edu.org/ai-branch/project3/emotions-detector.zip).
This dataset was provided for this past [Kaggle challenge](https://www.kaggle.com/competitions/challenges-in-representation-learning-facial-expression-recognition-challenge/overview).
It is possible to find more information about it on the challenge page. Train a CNN on the `train.csv` dataset. Here is an [example of an architecture](https://www.quora.com/What-is-the-VGG-neural-network) you can implement.
**The CNN has to perform more than 70% on the test set**. You can use the `test_with_emotions.csv` file for this. You will see that the CNNs take a lot of time to train.
**The CNN has to perform more than 60% on the test set**. You can use the `test_with_emotions.csv` file for this. You will see that the CNNs take a lot of time to train.
You don't want to overfit the neural network. I strongly suggest using early stopping, callbacks, and monitoring the training with `TensorBoard`.
You have to save the trained model in `my_own_model.pkl` and to explain the chosen architecture in `my_own_model_architecture.txt`. Use `model.summary())` to print the architecture.
You have to save the trained model in `final_emotion_model.keras` and explain the chosen architecture in `final_emotion_model_arch.txt`. Use `model.summary()` to print the architecture.
It is also expected that you explain the iterations and how you ended up choosing your final architecture. Save a screenshot of `TensorBoard` while the model is training in `tensorboard.png`, and save a plot with the learning curves showing the model training and stopping BEFORE it starts overfitting in `learning_curves.png`.
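As a starting point, here is a minimal Keras sketch of the training setup described above. The architecture and the dummy data are only illustrative; the real input is assumed to be preprocessed from `train.csv` as 48x48 grayscale images with 7 emotion classes.

```python
# Minimal sketch: small CNN, early stopping, TensorBoard monitoring, and the expected output files.
import os
import numpy as np
from tensorflow import keras

# Dummy data so the sketch runs; replace with images/labels preprocessed from train.csv.
X_train = np.random.rand(256, 48, 48, 1).astype("float32")
y_train = np.random.randint(0, 7, size=256)

model = keras.Sequential([
    keras.layers.Input(shape=(48, 48, 1)),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(64, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(7, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

callbacks = [
    keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True),  # stop before overfitting
    keras.callbacks.TensorBoard(log_dir="logs"),                           # monitor with TensorBoard
]
model.fit(X_train, y_train, validation_split=0.2, epochs=100, callbacks=callbacks)

os.makedirs("results/model", exist_ok=True)
model.save("results/model/final_emotion_model.keras")                      # expected deliverable
with open("results/model/final_emotion_model_arch.txt", "w") as f:
    model.summary(print_fn=lambda line: f.write(line + "\n"))              # architecture description
```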
- Optional: Use a pre-trained CNN to improve the accuracy. You will find some huge CNN architectures that perform well. The issue is that they are expensive to train from scratch.
@ -86,13 +93,10 @@ project
├── environment.yml
├── README.md
├── results
│   ├── hack_cnn
│   │   ├── hacked_image.png
│   │   └── input_image.png
│   ├── model
│   │   ├── learning_curves.png
│   │   ├── my_own_model_architecture.txt
│   │   ├── my_own_model.pkl
│   │   ├── final_emotion_model_arch.txt
│   │   ├── final_emotion_model.keras
│   │   ├── pre_trained_model_architecture.txt
│   │   └── pre_trained_model.pkl
│   └── preprocessing_test
@ -101,7 +105,7 @@ project
│   ├── image_n.png
│   └── input_video.mp4
└── scripts
├── hack_the_cnn.py
├── validation_loss_accuracy.py
├── predict_live_stream.py
├── predict.py
├── preprocess.py
@ -114,7 +118,7 @@ project
```prompt
python ./scripts/predict.py
Accuracy on test set: 72%
Accuracy on test set: 62%
```

View File

@ -24,12 +24,12 @@
###### Does the text document explain why the architecture was chosen, and what were the previous iterations?
###### Does the following command `python ./scripts/predict.py` run without any error and returns an accuracy greater than 70%?
###### Does the following command `python ./scripts/predict.py` run without any error and returns an accuracy greater than 60%?
```prompt
python ./scripts/predict.py
Accuracy on test set: 72%
Accuracy on test set: 62%
```

View File

@ -241,8 +241,10 @@ breast: One Hot
breast-quad: One Hot
['right_low' 'left_low' 'left_up' 'central' 'right_up']
irradiat: One Hot
['yes' 'no']
Class: Target (One Hot)
['recurrence-events' 'no-recurrence-events']
```
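For context, the One Hot columns checked in the next hunk can be produced with scikit-learn's `OneHotEncoder`; a minimal sketch follows, where the tiny DataFrame is only illustrative, the category values follow the description above, and older scikit-learn versions use `sparse=False` instead of `sparse_output=False`.

```python
# Minimal sketch of one-hot encoding the categorical columns listed above.
import pandas as pd
from sklearn.preprocessing import OneHotEncoder

ohe_cols = ["node-caps", "breast", "breast-quad", "irradiat"]
X_sample = pd.DataFrame({
    "node-caps": ["no", "yes"],
    "breast": ["left", "right"],
    "breast-quad": ["left_low", "central"],
    "irradiat": ["yes", "no"],
})

ohe = OneHotEncoder(handle_unknown="ignore", sparse_output=False)
ohe.fit(X_sample[ohe_cols])                 # in the project: fit on the train split, transform X_test
print(ohe.transform(X_sample[ohe_cols]))    # dense 0/1 array, as in the expected output
print(ohe.get_feature_names_out(ohe_cols))  # e.g. 'node-caps_no', 'node-caps_yes', 'breast_left', ...
```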
@ -259,16 +261,16 @@ input: ohe.transform(X_test[ohe_cols])[:10]
output:
array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
[0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1.],
[0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1.],
[1., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1.]])
[1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.]])
input: ohe.get_feature_names(ohe_cols)
input: ohe.get_feature_names_out(ohe_cols)
output:
array(['node-caps_no', 'node-caps_yes', 'breast_left', 'breast_right',
'breast-quad_central', 'breast-quad_left_low',

View File

@ -146,14 +146,14 @@ dtype: int64
array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
[0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1.],
[0., 1., 1., 0., 0., 1., 0., 0., 0., 0., 1.],
[1., 0., 1., 0., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 1., 0., 0., 0., 0., 1., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 0., 0., 1., 1., 0.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.],
[1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0.],
[0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1.]])
[1., 0., 1., 0., 0., 0., 0., 1., 0., 0., 1.],
[1., 0., 0., 1., 0., 1., 0., 0., 0., 1., 0.]])
```
@ -162,16 +162,16 @@ array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0.],
```console
#First 10 rows:
array([[1., 2., 5., 0., 1.],
[1., 3., 4., 0., 1.],
[1., 2., 4., 0., 1.],
[1., 3., 2., 0., 1.],
[1., 4., 3., 0., 1.],
[1., 4., 5., 0., 0.],
[2., 5., 4., 0., 1.],
[2., 5., 8., 0., 1.],
[0., 2., 3., 0., 2.],
[1., 3., 6., 4., 2.]])
array([[2., 5., 2., 0., 1.],
[2., 5., 2., 0., 0.],
[2., 5., 4., 5., 2.],
[1., 4., 5., 1., 1.],
[2., 5., 5., 0., 2.],
[1., 2., 1., 0., 1.],
[1., 2., 8., 0., 1.],
[2., 5., 2., 0., 0.],
[2., 5., 5., 0., 2.],
[1., 2., 3., 0., 0.]])
```
@ -180,8 +180,8 @@ array([[1., 2., 5., 0., 1.],
```console
# First 2 rows:
array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 2., 5., 0., 1.],
[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 1., 3., 4., 0., 1.]])
array([[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 2., 5., 2., 0., 1.],
[1., 0., 1., 0., 0., 1., 0., 0., 0., 1., 0., 2., 5., 2., 0., 0.]])
```
---

View File

@ -1,10 +1,12 @@
## Financial strategies on the SP500
In this project we will apply machine to finance. You are a Quant/Data Scientist and your goal is to create a financial strategy based on a signal outputted by a machine learning model that over-performs the [SP500](https://en.wikipedia.org/wiki/S%26P_500).
In this project, you'll apply machine learning to finance. Your goal as a Quant/Data Scientist is to create a financial strategy that uses a signal generated by a machine learning model to outperform the [SP500](https://en.wikipedia.org/wiki/S%26P_500).
The Standard & Poor's 500 Index is a collection of stocks intended to reflect the overall return characteristics of the stock market as a whole. The stocks that make up the S&P 500 are selected by market capitalization, liquidity, and industry. Companies to be included in the S&P are selected by the S&P 500 Index Committee, which consists of a group of analysts employed by Standard & Poor's.
The S&P 500 Index originally began in 1926 as the "composite index" comprised of only 90 stocks. According to historical records, the average annual return since its inception in 1926 through 2018 is approximately 10%11%. The average annual return since adopting 500 stocks into the index in 1957 through 2018 is roughly 8%.
As a Quant Researcher, you may beat the SP500 one year or few years. The real challenge though is to beat the SP500 consistently over decades. That's what most hedge funds in the world are trying to do.
The S&P 500 Index is a collection of 500 stocks that represent the overall performance of the U.S. stock market. The stocks in the S&P 500 are chosen based on factors like market value, liquidity, and industry. These selections are made by the S&P 500 Index Committee, which is a group of analysts from Standard & Poor's.
The S&P 500 started in 1926 with only 90 stocks and has grown to include 500 stocks since 1957. Historically, the average annual return of the S&P 500 has been about 10-11% since 1926, and around 8% since 1957.
As a Quantitative Researcher, your challenge is to develop a strategy that can consistently outperform the S&P 500, not just in one year, but over many years. This is a difficult task and is the primary goal of many hedge funds around the world.
The project is divided into parts:
@ -199,4 +201,5 @@ Note: `features_engineering.py` can be used in `gridsearch.py`
### Files for this project
You can find the data required for this project in this [link](https://assets.01-edu.org/ai-branch/project4/project04-20221031T173034Z-001.zip)
You can find the data required for this project here:
[link](https://assets.01-edu.org/ai-branch/project4/project04-20221031T173034Z-001.zip)

View File

@ -0,0 +1,160 @@
## Serverless Payments Reminder
Serverless Payments Reminder is a basic Slack Bot that reminds companies / users to pay their bills. The reminder gets triggered by AWS EventBridge (also known as CloudWatch Events). The bot itself is hosted on a simple AWS Lambda function.
### Requirements for the task
Create a simple Slack bot using:
- [AWS](https://aws.amazon.com/)
- [Serverless Framework](https://serverless.com/)
- [Cloudformation](https://aws.amazon.com/cloudformation/)
- [Terraform](https://www.terraform.io/)
### Task
The task is separated into multiple levels, ordered by complexity. You can choose the level that you feel most comfortable with.
## Level 0: Basic Lambda Function hosted via Serverless
**1. Create Slack Bot**
Write a Slack bot that sends a reminder to a Slack channel with the following message:
```sh
Dear Board Members! This is a reminder to make the payment for the licenses of the software. The due date is 07.XX.YY, where XX is the month and YY is the year. The amount to be paid is $ZZZ.VV. Please make the payment as soon as possible to <IBAN_NUMBER>. Thank you!
```
The environment variables which must be available in the Lambda function are:
- `SLACK_WEBHOOK_URL` - The Slack Webhook URL
- `AMOUNT` - The amount to be paid
- `IBAN_NUMBER` - The IBAN number
The due date must be calculated from the current date, for the next month.
Before step two, test that the Slack bot is functioning properly by running the application locally. It should send a message to the Slack channel.
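A minimal sketch of that bot logic follows; the Python runtime is an assumption (the linked example is in Go), and "07.XX.YY" is interpreted as the 7th day of the next month.

```python
# Minimal sketch: build the reminder from environment variables and post it to the Slack webhook.
import json
import os
import urllib.request
from datetime import date

def build_message() -> str:
    today = date.today()
    # Next month relative to the current date (assumed interpretation of the due-date rule).
    year, month = (today.year + 1, 1) if today.month == 12 else (today.year, today.month + 1)
    return (
        "Dear Board Members! This is a reminder to make the payment for the licenses of the software. "
        f"The due date is 07.{month:02d}.{year % 100:02d}. "
        f"The amount to be paid is ${os.environ['AMOUNT']}. "
        f"Please make the payment as soon as possible to {os.environ['IBAN_NUMBER']}. Thank you!"
    )

def handler(event, context):
    # Lambda entry point; EventBridge triggers it monthly.
    payload = json.dumps({"text": build_message()}).encode()
    request = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)

if __name__ == "__main__":
    handler({}, None)  # local smoke test: should post the reminder to the Slack channel
```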
**2. Write tests for the application**
Write integration tests for the application to ensure that each part of the application is functioning properly.
**3. Use [Serverless Framework](https://github.com/serverless/serverless) to host the lambda function**
The Serverless Framework is an automation tool used to deploy serverless applications, which can help with event-driven architecture deployments.
Use the Serverless Framework to host the Lambda function combined with AWS EventBridge.
**Refs**
- [AWS EventBridge](https://www.serverless.com/framework/docs/providers/aws/events/event-bridge)
- [AWS Lambda Function](https://www.serverless.com/framework/docs/providers/aws/guide/functions).
AWS EventBridge should trigger the Lambda function on the 1st day of every month at 10:00 AM.
**Note** Before hosting, you need to ensure that the bot is written in a way that it can be hosted on AWS Lambda. You can look at the example [here](https://github.com/KostLinux/aws-incident-manager-notifier/blob/56d52e90f8a14e689e7d2a1c7ee44590de5af2f5/main.go#L158).
## Level 1: Basic Lambda Function automation via Cloudformation
**1. Repeat the 1st and 2nd step from Level 0**
Create a Slack bot that sends a reminder to a Slack channel. Write integration tests for the application to ensure that each part of the application is functioning properly.
**Note** Don't forget to add a Lambda handler for the application
**2. Use [Cloudformation](https://aws.amazon.com/cloudformation/) to host the lambda function**
CloudFormation is an Infrastructure as Code (IaC) tool used to automate the provisioning of AWS resources. It allows you to use a simple YAML syntax to define the resources you want to create.
**2.1 Hosting the Lambda function**
Write a CloudFormation template with the parameters (variables) `SlackWebhookUrl`, `Amount`, and `IbanNumber`. The template should automatically provision the Slack bot based on the architecture mentioned in [Level 0](#level-0-basic-lambda-function-hosted-via-serverless), step 3.
## Level 2: Basic Lambda Function automation via Basic Terraform with Local State
Terraform is an Infrastructure as Code (IaC) tool used to automate the provisioning of resources across different SaaS solutions and cloud providers. It uses HCL syntax, which is easy to maintain. The main drawback of Terraform is that you need to know how everything works under the hood (e.g. IAM roles, policies, serverless platforms, etc.).
**1. Repeat the 1st and 2nd step from Level 0**
Create a Slack bot that sends a reminder to a Slack channel. Write integration tests for the application to ensure that each part of the application is functioning properly.
**Note** Don't forget to add a Lambda handler for the application
**2. Automate the hosting via Terraform**
**2.1 Write an IAM role with the least privilege principle for the lambda function and EventBridge.**
**2.2 Automatically pack up the lambda function into a zip file**
**2.3 Build the lambda function based on the zip file**
**2.4 Build AWS Eventbridge, which will trigger the lambda function**
**Note** The Lambda function should be triggered on the 1st day of every month at 10:00 AM.
At this level you can use local state for Terraform; no modules need to be written.
## Level 3: Basic Lambda Function automation via Terraform Module with Remote S3 State (advanced)
In the DevOps world we mostly use remote state for Terraform. The main reason is that the state can be shared with other team members and the infrastructure state is easier to manage. Although it is a bit more complex to set up, it helps you avoid issues even if someone removes the state file, because S3 has versioning turned on.
This level is more advanced, but closer to a real-world scenario. As a DevOps Engineer / SRE, you will find that most Terraform code is written as modules (or uses community pre-built modules). Modules help maintain the codebase and reuse code across projects, so you don't need to write the same code again and again.
**1. Repeat the steps from [Level 2](#level-2-basic-lambda-function-automation-via-basic-terraform-with-local-state)**
Repeat all the steps from Level 2, but now write a Terraform module for the Lambda function and EventBridge. This means that instead of writing values directly into the `main.tf` file, everything should be written using variables. For example:
```tf
resource "aws_lambda_function" "slack_bot" {
function_name = var.function_name
role = var.role
handler = var.handler
runtime = var.runtime
timeout = var.timeout
}
```
We're using variables for each value, so we can reuse the module in different projects.
**2. Use remote state for terraform**
In this step you need to configure an IAM user following the least-privilege principle, so Terraform can access the S3 bucket and provision the EventBridge and Lambda resources.
**Note** We would recommend using [Cloudformation](https://aws.amazon.com/cloudformation/) to create the IAM user with the least-privilege principle plus the S3 bucket for the remote state. CloudFormation can be applied via the AWS CLI, which means everything remains as code.
**3. Use the S3 bucket for the remote state and module to provision the resources**
Write your backend configuration to the `backend.tf` file:
```tf
terraform {
  backend "s3" {
    bucket  = ""
    key     = ""
    region  = ""
    encrypt = ""
  }
}
```
Use the module written in step 1 to provision the resources:
```tf
module "slack_bot" {
source = "./modules/slack_bot"
function_name = "slack_bot"
role = "arn:aws:iam::123456789012:role/lambda-role"
handler = "main.handler"
runtime = "nodejs14.x"
timeout = 10
}
```
## Helpful commands
- `serverless deploy` - Deploy the serverless application
- `aws cloudformation deploy --template-file template.yaml --stack-name my-stack` - Deploy the cloudformation stack
- `terraform init` - Initialize the terraform project and remote state.
- `terraform plan` - Plan the resources you're going to provision. It will show you the changes that will be made.
- `terraform apply` - Apply the terraform project only if you're sure that everything is correct during the plan.

View File

View File

@ -0,0 +1,3 @@
module "aws_slack_bot" {
source = "./modules/aws_slack_bot"
}