Weekly Status Template
Executive Summary:
!!!!

Accomplishments:
!!!!

Issues for Escalations/Blockers:
!!NIL!!

Plan for next week:
!!!!

PTO and Vacation:
Swathi: !!!!
Jenani: !!!!
Suraj: !!!!
Anees: !!!!
Keerthana: !!!!
Jayaprakash: !!!!
Kaushik: !!!!
Shaik: !!!!
Jeyanth: !!!!

DETAILED SUMMARY:

JIRA TICKET(S) COMPLETED:
[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!
[SANJAY RAVI] !!JIRA-ID!!: !!Description!! – ==Closed.== !!Brief Summary of the ticket in 3-4 points!!

JIRA TICKET(S) IN PROGRESS:
[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]
[SANJAY RAVI] !!JIRA-ID!!: !!Description!! – ==In Progress.== !!Brief Summary of the ticket in 2-3 points!! [ETA - !!Date!!]

JIRA TICKET(S) ON HOLD:
[JENANI SAMPATH] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SURAJRAM SRINIVASAN] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[ANEES ASLAM] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[KEERTHANA SUNDARBABU] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[JAYAPRAKASH MASILAMANI] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SWATHI RAJAMANI] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[KAUSHIK VISHAL SARAVANAN] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
[SHAIK SHUAIB AFNAN] !!JIRA-ID!!: !!Description!! – ==On Hold.== !!Brief Summary of the ticket in 2-3 points!!
Incident Report: Compromise of Ubuntu Server on AWS
Incident Overview
On June 7th, 2024, we received a disk-usage alert for our AWS-hosted Ubuntu server airtel-dev-2. Subsequent investigation revealed that the server had been compromised via an outdated, publicly accessible Docker instance and was being used to conduct network scans.

Timeline and Findings
1. Observed that the main disk was filling up
   - /var/log/syslog was > 7 GB.
   - Tailed the log and saw suspicious messages, e.g.:
     httpd[1159514]: message repeated 18 times: [ Error running zzh command: fork/exec /usr/local/bin/pnscan: no such file or directory]
     Jun 7 21:40:07 airtel-dev-2 httpd[508]: Error creating output file: open .r.89.118.o: too many open files
2. Searched for httpd files
   - Searched the file system and found suspicious files in /etc/.httpd:
     /etc/.httpd/…/httpd
     /etc/.httpd/…./httpd
   - Found scan result files in /etc/.httpd/…/:
     ips_for_zgrab_120.txt, ips_for_zgrab_87.txt, scan_120.0.0.0_8.txt, scan_156.0.0.0_8.txt, scan_87.0.0.0_8.txt
   - The scan files contained the results of scans for port 2375 (the unencrypted Docker daemon port).
   - The file command showed that the two suspicious httpd binaries were executables written in Go.
   - The strings command showed that one of the binaries decoded a base64 blob and untarred an executable called vurl, which was then used to download a script whose contents it attempted to install as cron jobs.
   - This appears to have partially failed: the cron entries were not present when checked.
   - Conclusion: the breach was designed to use our server to scan for other vulnerable servers.
3. Discovery of Installed Libraries
   - Observed that the syslog was still filling; top showed no services performing suspicious activity.
   - Found suspicious .so files in /usr/local/lib:
     /usr/local/lib/top.so, /usr/local/lib/pnscan.so, /usr/local/lib/masscan.so, /usr/local/lib/httpd.so, /usr/local/lib/bioset.so
   - These files were referenced in /etc/ld.so.preload.
   - Removed the files and the /etc/ld.so.preload file:
     - The initial removal attempt failed with "permission denied".
     - lsattr showed the files had the immutable flag set; removed the flag with chattr -i, then deleted the files.
   - Restarted the server to restart all applications that had loaded the malicious shared libraries.
   - Verified with lsof | grep that the shared libraries were no longer in use.
4. Used top to find what was still filling the logs
   - After removing the .so files, top worked properly.
   - Observed a process running /var/tmp/.222/top; investigating the directory revealed more suspicious files.
   - strings on the top binary showed references to cryptocurrency mining.
   - Removing the directory and killing the process did not help: the process restarted.
   - Searching /etc revealed a monitor.service unit file that was restarting it; disabled the service with systemctl.
   - Found two other service files containing malicious scripts:
     -rw-r--r-- 1 root root 157 Jun 7 10:22 zzhr.service
     -rw-r--r-- 1 root root 159 Jun 7 10:22 zzhre.service
   - Disabled both of those services as well; the rogue top process stopped running.
5. Searched for files created around the time of the attack
   - Found an a.tar file that contained the vurl command.

Attack Vector
Entry Point: The attack vector was most likely the Docker daemon, which was outdated and publicly accessible, leading to exploitation.

Result of the Attack
1. Installation of Scanning Software: The attackers installed scanning software designed to find more publicly accessible Docker ports, likely to infect additional systems.
2. Additional Malicious Activities:
   - Crypto Mining: Identified a binary associated with cryptocurrency mining.
   - Potential Further Activities: Other malicious activities are not yet fully detailed, but likely involve the compromised system being used for further attacks or exploitation.

---

Immediate Actions and Recommendations
1. Immediate Actions Taken/Needed
   - Isolated the compromised server to prevent further damage.
   - Conducted a thorough forensic analysis to understand the extent of the compromise.
   - Updated Docker to the latest secure version.
   - Reviewed and corrected Docker and server configurations to enhance security.
   - Implemented stricter access controls and firewall rules to limit public exposure.
2. Recommendations for Future Prevention
   - Regular Updates: Ensure regular updates and patch management for all software and services.
   - Access Controls: Implement strict access controls and network segmentation to limit exposure.
   - Configuration Management: Regularly review and audit service configurations.
   - Monitoring and Logging: Enhance monitoring and logging to detect and respond to suspicious activities promptly.
   - Security Training: Conduct regular security training for IT staff to ensure awareness of best practices and emerging threats.
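The library-removal and service-cleanup steps from the timeline can be condensed into the following command sequence. This is an illustrative reconstruction from the findings, not a verbatim transcript; the paths are the ones observed on airtel-dev-2. Run as root, and only after confirming the files are malicious and preserving forensic copies:

```shell
# Inspect the preload hook and confirm the immutable flag on the planted libraries
cat /etc/ld.so.preload
lsattr /usr/local/lib/top.so /usr/local/lib/pnscan.so /usr/local/lib/masscan.so \
       /usr/local/lib/httpd.so /usr/local/lib/bioset.so

# Clear the immutable attribute (plain rm fails with "permission denied" otherwise)
chattr -i /etc/ld.so.preload /usr/local/lib/top.so /usr/local/lib/pnscan.so \
          /usr/local/lib/masscan.so /usr/local/lib/httpd.so /usr/local/lib/bioset.so
rm -f /etc/ld.so.preload /usr/local/lib/top.so /usr/local/lib/pnscan.so \
      /usr/local/lib/masscan.so /usr/local/lib/httpd.so /usr/local/lib/bioset.so

# Stop and disable the persistence units that kept restarting the miner
systemctl disable --now monitor.service zzhr.service zzhre.service

# After a reboot, verify no process still maps the deleted libraries
lsof | grep -E 'top\.so|pnscan\.so|masscan\.so|httpd\.so|bioset\.so'
```

Note that deleting the files without a reboot is not enough: processes that had already loaded the preloaded libraries keep them mapped until restarted, which is why the timeline includes a server restart followed by the lsof check.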
Obstacle Detection
1. EXECUTIVE SUMMARY
An obstacle is an object that causes disruption for traffic and interferes with its flow. The system provides in-vehicle warnings about upcoming obstacles for safer navigation.
Examples:
- Road construction (barriers, block stones, block cones)
- Pedestrian interference
- Pet interference
- Vehicle interference

2. OVERVIEW
USE CASES - Autonomous Vehicle
PROBLEM TO SOLVE
- Avoid collisions and congestion in traffic.
- Warn early enough to stop in time.
SYSTEM INPUT
- Sensory input - camera
SYSTEM OUTCOME
- Obstacle detection (identification and localization)
- Tracking

3. PRE-REQUISITES - PACKAGES
- Python 3.10
- OpenCV
- PyTorch
- dvc
- dvc-s3

4. MACHINE LEARNING MODEL
MODEL USED - YOLOv8l
Benefits:
- Low latency
- Higher accuracy than other object detection models
Key features:
- YOLOv8 utilizes state-of-the-art CNN architectures in its backbone and neck for efficient feature extraction from images, which translates to better object detection accuracy.
- YOLOv8 predicts the object's center directly. This simplifies the detection process and potentially improves efficiency while maintaining accuracy.
- YOLOv8 is designed to achieve a good balance between object detection accuracy and processing speed, making it suitable for real-time applications where fast performance is crucial.
Metrics:
- mAP, mAP50, mAP75, mAPs
- FLOPs (floating-point operations)

5. MODEL ARCHITECTURE
Figure 1 — image adapted from: https://blog.roboflow.com/whats-new-in-yolov8

6. SYSTEM DEVELOPMENT PLAN
I. Assemble the object detection dataset to train YOLOv8l (pretrained on Roboflow-100 and the COCO dataset)
II. Train YOLOv8l on the custom dataset
III. Fine-tune the YOLOv8l models
IV. Run inference on test images
V. Run the trained YOLOv8l model on video

7. PIPELINE
7.1 DATA COLLECTION - DATASET
KITTI vision benchmark suite: a popular dataset for autonomous-vehicle research tasks, especially obstacle detection. It provides a variety of sensor data collected from a vehicle driving through various urban environments in Karlsruhe, Germany.
Class labels: "Car", "Van", "Truck", "Pedestrian", "Cyclist", "Tram", "Misc", "DontCare"

7.2 DATA TO BE TRAINED
- Images (front-cam view of the car)
- Labels (classes)
Label metric notations:

7.3 DATA SPLIT UP
Datafolder
- Training: Images, Labels
- Testing: Images, Labels
- Validation: Images, Labels

7.4 EVALUATION METRICS
- mAP (mean Average Precision)
- Ground truth measurements vs. inferences: bounding boxes, tracking
- Confidence score
- Classification (multiclass model scenarios)
- Annotation of the objects (identify the object)
OBSTACLE DETECTION CALIBRATION
- Confidence threshold: confidence scores range from 0 to 1.
- Detections scoring at or above the threshold are labeled/classified to that class; those below are discarded.

7.5 LOSS FUNCTIONS
- Cross-entropy loss
- Intersection over Union (IoU) loss

7.6 OPTIMIZATIONS
- Pruning and quantization
- Distilled models from pre-trained models
- Post-processing of YOLOv8l detections: apply techniques like non-max suppression (NMS) to refine the bounding boxes generated by YOLOv8l (helps reduce redundant detections).
- Detection with tracking - SORT algorithm

8. DOCUMENTATION REFERENCES
https://yolov8.org/
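The confidence-threshold calibration (7.4) and NMS post-processing (7.6) can be sketched together in plain Python. This is a greedy-NMS sketch for intuition, not the Ultralytics implementation; the box format (x1, y1, x2, y2) and the threshold defaults are illustrative assumptions:

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0


def nms(detections, conf_threshold=0.5, iou_threshold=0.45):
    """Greedy non-max suppression over (box, score) detections.

    Detections below conf_threshold are discarded (the calibration rule
    above); surviving boxes are kept highest-score first, and any box
    overlapping an already-kept box by >= iou_threshold is suppressed.
    """
    kept = []
    candidates = sorted(
        (d for d in detections if d[1] >= conf_threshold),
        key=lambda d: d[1],
        reverse=True,
    )
    for box, score in candidates:
        if all(iou(box, kept_box) < iou_threshold for kept_box, _ in kept):
            kept.append((box, score))
    return kept
```

Trackers such as SORT consume exactly this kind of filtered (box, score) list frame by frame, which is why NMS sits between the detector and the tracker in the pipeline.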
Face Matching System
Use case Description: Performs facial recognition by detecting faces, extracting features, and identifying individuals. FaceNet512 is an image recognition model; combined with a cascade classifier for face detection, it is used to perform face matching.
Model Used: FaceNet512
Hardware Support: Raspberry Pi, Pi camera
Base Model GitHub Link:
GitLab Link to Repository and steps to Train: http://gitlab.embedur.local/cloud/tinyml/-/tree/face-matching
Backbone Research Paper:
Dataset Link:
Optimizations Done:
- Optimized Data Preprocessing: Mean subtraction and scaling align input data with training data for better model performance.
- Data Augmentation for Robustness: Variations in brightness, rotation, and size help the model handle real-world image variations.
- Training Focused on Generalization: Early stopping prevents overfitting, improving performance on unseen data; a learning-rate scheduler allows faster convergence and potentially better generalization.
- Transfer Learning for Efficiency: Leverages pre-trained FaceNet512 for a powerful base.
- Fine-tuned Model Architecture: Freezing initial layers prevents overfitting by keeping pre-trained knowledge; global average pooling captures global features for improved accuracy; a dense layer with class-specific neurons tailors the model to the specific classes (people).
Download Formats: Tflite
Input and Output Images:
Input Images:
Output Images:
JSON File Format:
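Once FaceNet512 produces an embedding for each detected face, identification reduces to nearest-neighbour search over the enrolled embeddings. A minimal sketch of that matching step, assuming Euclidean distance and an illustrative acceptance threshold (the distance metric and threshold value are assumptions, not values taken from this project):

```python
import math


def euclidean(a, b):
    """Euclidean distance between two equal-length embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def match_face(probe, gallery, threshold=10.0):
    """Return (name, distance) for the closest enrolled embedding,
    or (None, distance) if the best match exceeds the threshold.

    probe:     embedding of the detected face (512-d for FaceNet512)
    gallery:   dict mapping person name -> enrolled embedding
    threshold: maximum accepted distance (a tuning assumption)
    """
    name, dist = min(
        ((n, euclidean(probe, emb)) for n, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return (name, dist) if dist <= threshold else (None, dist)
```

In a deployment the gallery would be built once at enrollment time from TFLite inference outputs; the per-frame cost is then one distance computation per enrolled person.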
Infrastructure Defect Detection System
Use case Description: The Infrastructure Defect Detection System (IDDS) is a comprehensive solution designed to detect, monitor, and report defects or anomalies in various types of infrastructure. The Inception-v4 model is an advanced convolutional neural network architecture optimized for image classification tasks, featuring inception modules, factorization techniques, residual connections, and auxiliary classifiers for improved performance and efficiency.
Model Used: Inception V4
Hardware Support: Raspberry Pi
Base Model GitHub Link:
GitLab Link to Repository and steps to Train: http://gitlab.embedur.local/cloud/tinyml/-/tree/infrastructure-defect-detection
Backbone Research Paper:
Dataset Link: https://image-net.org/challenges/LSVRC/index.php
Optimizations Done:
- Optimized Data Preprocessing: Mean subtraction and scaling align input data with training data for better model performance.
- Data Augmentation for Robustness: Variations in brightness, rotation, and size help the model handle real-world image variations.
- Training Focused on Generalization: Early stopping prevents overfitting, improving performance on unseen data; a learning-rate scheduler allows faster convergence and potentially better generalization.
- Transfer Learning for Efficiency: Leverages the pre-trained Inception V4 model for a powerful base.
- Fine-tuned Model Architecture: Freezing initial layers prevents overfitting and preserves pre-trained knowledge; global average pooling captures comprehensive structural features for precise defect detection; class-specific neurons tailor the model to identify distinct types of defects efficiently.
Download Formats: Tflite
Input and Output Images:
Input Images:
Output Images:
JSON File Format:
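The fine-tuned head described above replaces the backbone's spatial output with a global average pool before the class-specific dense layer. In plain Python, GAP is just a per-channel mean over all spatial positions (a sketch for intuition; in the actual model this is a single pooling layer in the network graph):

```python
def global_average_pool(feature_map):
    """Collapse an H x W x C feature map (nested lists) to a length-C
    vector by averaging each channel over all spatial positions."""
    h = len(feature_map)
    w = len(feature_map[0])
    c = len(feature_map[0][0])
    pooled = [0.0] * c
    for row in feature_map:
        for pixel in row:
            for ch in range(c):
                pooled[ch] += pixel[ch]
    n = h * w
    return [total / n for total in pooled]
```

Because the output length depends only on the channel count, the pooled vector feeds a dense layer whose neuron count equals the number of defect classes, which is what makes the head easy to retrain while the frozen backbone stays fixed.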
Transfer Learning on YOLO
To train
Guide to Hardening an Ubuntu Server - an attempt to prevent future breaches
Package Management Uninstall Unnecessary Packages: Remove any packages that are not required for the server's functionality. Vulnerability Scanning: Implement a vulnerability scanner to identify packages with known vulnerabilities. Update vulnerable packages individually to avoid unnecessary updates and potential downtime. 2. Secure SSH Use ED25519 Keys: Generate and use ED25519 keys instead of RSA keys for improved security. Disable Root Login: Set in . Disable Password Authentication: Set in . Fail2Ban: Install and configure fail2ban to prevent brute-force attacks. 3. Firewall Configuration External Firewall: Ensure that the firewall is properly configured outside of the servers, either through a dedicated firewall instance or by utilizing the security group settings in AWS. Docker Considerations: Be cautious when implementing local firewalls as Docker uses iptables for its own traffic management. Coordinate with the team to avoid potential conflicts. 4. Disable Unnecessary Services and Kernel Modules Identify and Disable Services: Use to identify and disable unnecessary services. Disable Unnecessary Kernel Modules: Identify and disable unnecessary kernel modules. 5. User Management Consistent User Logins: Maintain consistent user logins across the servers to simplify management. Certificate Authority-based Authentication: Explore the implementation of Certificate Authority-based authentication using opensource tools. Discuss the feasibility and benefits with the team. 6. Enable SELinux or AppArmor SELinux: Enable and configure SELinux. AppArmor: Ensure AppArmor is enabled and enforce policies. 7. Logging and Monitoring Centralized Logging: Investigate centralized logging stacks such as the ELK stack (Elasticsearch, Logstash, Kibana) or other technologies that the team has experience with. Log Analysis: Utilize the centralized logging solution for log analysis and monitoring. 8. 
Implement Intrusion Detection and Prevention IDS/IPS: Deploy IDS/IPS systems like Snort or Suricata. 9. Secure Network Services HTTPS Configuration: Review and ensure proper HTTPS configuration for the specific HTTP server being used (e.g., Nginx, Apache). Service Communication: Audit the communication between services to identify any areas where security can be enhanced. 10. Implement File and Directory Permissions Least Privilege: Set appropriate file and directory permissions. 11. Data Encryption Granular Encryption: Continue encrypting sensitive data at a more granular level rather than relying on full disk encryption. 12. Perform Regular Security Audits Security Audits: Conduct regular security audits using tools like Lynis, OpenVAS, or Nessus. 13. Implement Kernel Hardening Kernel Security Features: Enable kernel security features like Address Space Layout Randomization (ASLR) and Execute Disable (NX) bit. Hardened Kernel: Consider using a hardened kernel, such as the grsecurity kernel patch or the Linux Kernel Runtime Guard (LKRG). 14. Secure Protocols Protocol Audit: Conduct an audit of the protocols used for communication between services. Identify any insecure protocols and plan for their replacement. 15. Network Segmentation Traffic Segregation: Develop a plan to segregate traffic between servers based on their roles and security requirements. Access Control: Implement access control measures to restrict traffic between network segments. 16. Secure DNS Configuration Trusted DNS Servers: Configure your server to use trusted DNS servers, preferably over encrypted protocols like DNS over HTTPS (DoH) or DNS over TLS (DoT). DNSSEC Validation: Implement DNSSEC validation to ensure the integrity and authenticity of DNS responses. 17. Restrict Cron and At Jobs Limit Access: Limit access to the and utilities to authorized users only. Monitor Jobs: Regularly review and monitor cron jobs and at jobs for suspicious or unauthorized entries. 18. 
Monitor System Integrity
Integrity Monitoring: Use tools like Tripwire or AIDE (Advanced Intrusion Detection Environment) to monitor the integrity of critical system files and detect unauthorized changes.
Integrity Checks: Regularly run integrity checks and compare the results against a known-good baseline.
19. Educate and Train Users
Security Awareness Training: Provide regular training to users on best security practices.
20. Conduct Penetration Testing
Regular Penetration Testing: Perform regular penetration testing to identify vulnerabilities and weaknesses in your server's security posture.
Security Assessments: Engage with experienced security professionals or use automated tools like Metasploit or Nmap for thorough assessments.
Regular Review and Updates
Stay Informed: Regularly review and update hardening measures to keep up with emerging threats and best practices. Test changes in a non-production environment before applying them to production servers to avoid unintended disruptions.
Next Steps
* Prioritize the implementation of the hardening measures based on the team's feedback and the organization's security requirements.
* Regularly review and update the hardening guide to keep up with emerging threats and best practices.
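As a companion to section 2, the SSH settings can be checked mechanically. The sketch below is illustrative only, assuming a standard sshd_config layout; the audit_sshd_config helper is hypothetical and not part of any tool named in this guide.

```python
# Minimal sketch: audit sshd_config text for the hardening directives from
# section 2 (PermitRootLogin and PasswordAuthentication should both be "no").

EXPECTED = {
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def audit_sshd_config(text: str) -> dict:
    """Return {directive: (found_value, ok)} for each directive in EXPECTED."""
    found = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        parts = line.split(None, 1)
        if len(parts) == 2 and parts[0] in EXPECTED:
            found[parts[0]] = parts[1].strip()
    return {k: (found.get(k), found.get(k) == v) for k, v in EXPECTED.items()}

if __name__ == "__main__":
    sample = "Port 22\nPermitRootLogin no\nPasswordAuthentication yes\n"
    for directive, (value, ok) in audit_sshd_config(sample).items():
        print(f"{directive}: {value!r} {'OK' if ok else 'NOT HARDENED'}")
```

A script like this could run from cron alongside the integrity checks in section 18, flagging drift from the agreed baseline.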
Models Progress
Public-Site General Models Progress
Week (Dates) | Planned | Completed | In Progress
1 (7th June) | 20 | 20 | 0
2 (14th June) | 20 | 20 | 0

Public-Site Usecase Models Progress
Week (Dates) | Planned | Completed | In Progress
1 (7th June) | 2 | 2 | 0
2 (14th June) | 3 | 2 | 1

Synaptics General Models Progress
Week (Dates) | Planned | Completed | In Progress
1 (7th June) | 20 | 20 | 0
2 (14th June) | 20 | 20 | 0

Synaptics Usecase Models Progress
Week (Dates) | Planned | Completed | In Progress
1 (7th June) | 2 | 2 | 0
2 (14th June) | 3 | 0 | 3

Public General Models Completed
ResNet 101, MobileNet V3 Small, ShuffleNet, SqueezeNet, AlexNet, ResNet 50, Mediapipe - Pose, Mediapipe - Segmentation, Inception V3, MNasNet, YoloV8 - Seg, YoloV8 - Detection, YoloV8 - Pose, YoloV5 - Detection, FastSAM, YoloV8 - Face, WakeWord Detection, YolACT, MobileNet V3 Large, DenseNet-121, EfficientNet B3, RegNetX 800mf, FCN ResNet 101, EfficientNet B0, GoogleNet, ResNext 50, ResNext 101, RegNetX 400mf, MobileNet V2, DeeplabV3 MobileNet V3, EfficientNet V2 Small, EfficientNet B7, MNasNet 1_3, SimpleNet V1_5m_m2, Unet, Wide resnet50, FCN resnet50, Densenet201, Proxyless_cpu, Regnet_y_2_3gf

Public General Models Planned (Next update)
Simplenetv1_9m_m2, Proxyless_mobile, Proxyless_gpu

Public Usecase Models Completed
Driver Distraction System, Face Matching System, Infrastructure Defect Detection System, YolACT on 480 x 480

Public Usecase Models Planned
Obstacle Detection

Synaptics General Models Completed (With NPU metrics)
ResNet 101, MobileNet V3 Small, ShuffleNet, SqueezeNet, AlexNet, ResNet 50, Mediapipe - Pose, Mediapipe - Segmentation, Inception V3, MNasNet, YoloV8 - Seg, YoloV8 - Detection, YoloV8 - Pose, YoloV5 - Detection, FastSAM, YoloV8 - Face, WakeWord Detection, YolACT, MobileNet V3 Large, DenseNet-121, EfficientNet B3, RegNetX 800mf, FCN ResNet 101, EfficientNet B0, GoogleNet, ResNext 50, ResNext 101, RegNetX 400mf, MobileNet V2, DeeplabV3 MobileNet V3, EfficientNet V2 Small, EfficientNet B7, MNasNet 1_3, SimpleNet V1_5m_m2, Unet, Wide resnet50, FCN resnet50, Densenet201, Proxyless_cpu, Regnet_y_2_3gf
Weekly Status Report - 13th June
Executive Summary:
Model Zoo:
Worked on completing the Login components with the token authentication for the Synaptics model zoo.
Worked on adding the model details page with the model download and the instructions to use the model.
Working on bringing up the About Us and Contact Us pages for the site.
Collected metrics for the 40 general models and 3 usecase models for the public site; working on adding the additional usecase models.
Working on collecting the next set of models for the Synaptics model zoo and the usecase models.

Accomplishments:
Dev:
AI-390: Curation of model details for each of the selected models - 4 models - ==Closed.==
AI-427: Curation of model details for each of the selected models - 5 models - ==Closed.==
AI-428: Collection of model metrics for the set of selected 5 models including RPI – ==Closed.==
AI-485: Clean up DB to remove unwanted details – ==Closed.==
AI-462: Set full names while displaying platform and download formats – ==Closed.==
AI-430: Collection of model metrics for Synaptics next 5 models.
– ==Closed.==
AI-452: Benchmark metrics for SL1680 – ==Closed.==
AI-432: Metrics Collection of Face matching use case for SL1680 – ==Closed.==
AI-407: Metrics Collection of Driver Distraction use case for SL1680 – ==Closed.==
AI-355: UI for user login page with unit testing – ==Closed.==
AI-360: Backend APIs for User Login unit testing – ==Closed.==
AI-437: Collection of model metrics for the set of selected next 5 models including RPI – ==Closed.==
AI-365: APIs for model details, metrics and download options – ==Closed.==
AI-363: UI for model details, metrics and download options – ==Closed.==
AI-433: Curation of model details for each of the selected models - next 5 models – ==Closed.==
AI-436: Defect Detection System Training – ==Closed.==
AI-438: Metrics collection of use case - RPI – ==Closed.==
AI-490: Collection of metrics for the Synaptics 5 models - ==In Progress.==
AI-442: Training of the Obstacle Detection usecase – ==In Progress.==
UI Changes for Authentication – ==In Progress.==
AI-368: UI for Other page – ==In Progress==
AI-369: APIs for Other pages – ==In Progress==

DevOps:
AI-381: Deployment pipeline setup of public model zoo site - Prod – ==Closed.==
AI-382: Planning of deployment of model zoos for synaptics - Prod – ==Closed.==
AI-459: Postgres DB backup for model zoo servers - ==Closed.==

Issues for Escalations/Blockers: NIL

Plan for next week:
Model Zoo:
Working on enhancements to the model details page.
Bug fixes on the UI and UX front for the Synaptics and Public sites.
Setting up responsiveness for mobile sites.
Release of the protected Login-enabled Beta site.
Setting up of the instances and deployment plan for the Public and Synaptics sites.

PTO and Vacation: NIL

DETAILED SUMMARY:

JIRA TICKET(S) COMPLETED:
[JENANI SAMPATH] AI-381: Deployment pipeline setup of public model zoo site - Prod – ==Closed.==
Completed the addition of beta and prod stages in the pipeline for both frontend and backend.
Whenever code is merged to the respective branch, the pipeline will be triggered for that environment.
Added documentation for the same.
[JENANI SAMPATH] AI-382: Planning of deployment of model zoos for synaptics - Prod – ==Closed.==
Completed the pipeline for both frontend and backend for Synaptics.
Whenever code is merged to the respective branch, the pipeline will be triggered for that environment.
Added documentation for the same.
[JENANI SAMPATH] AI-390: Curation of model details for each of the selected models - 4 models - ==Closed.==
Collected model metrics for 4 models:
1. Yolov8face
2. wake_word tflite
3. yolact
4. Mobilenetv3 large
[JENANI SAMPATH] AI-427: Curation of model details for each of the selected models - 5 models - ==Closed.==
Collected model metrics for 5 models:
1. Resnext 101
2. Regnet400mf
3. Mobilenet v2
4. GoogleNet
5. Resnext50
[JENANI SAMPATH] AI-459: Postgres DB backup for model zoo servers - ==Closed.==
Attached an S3-access IAM role to the EC2 instances.
Added the DB backup script to a cron job that is triggered every day at 10 AM IST.
The Postgres backup dump is pushed to an S3 bucket.
[SURAJRAM SRINIVASAN] AI-428: Collection of model metrics for the set of selected 5 models including RPI – ==Closed.==
Collected the model metrics for the next set of 5 models and pushed them to the DB.
Completed loading the 40 general models into the DB for the public site.
[SURAJRAM SRINIVASAN] AI-485: Clean up DB to remove unwanted details – ==Closed.==
Removed the unwanted categories, domains and other unrelated values from the DB.
Cleaned up the DB and fixed the missing values.
[SURAJRAM SRINIVASAN] AI-462: Set full names while displaying platform and download formats – ==Closed.==
Modified the DB to set the full names instead of the alias names for the platforms and download formats.
Verified the same and avoided short forms for the fields in the DB.
[SURAJRAM SRINIVASAN] AI-430: Collection of model metrics for Synaptics next 5 models.
– ==Closed.==
Set up the pipeline for the NPU metrics on the SL1680.
Collected the metrics using the SL1680 for the next set of 5 models and pushed the data to the DB.
[ANEES ASLAM] AI-452: Benchmark metrics for SL1680 – ==Closed.==
Updated the benchmark_model tool for inference on the NPU with SDK v1.0.0 (delegates).
Integrated it with the SL1680 pipeline to perform layer-wise metrics collection.
[ANEES ASLAM] AI-432: Metrics Collection of Face matching use case for SL1680 – ==Closed.==
Completed uploading model metrics of the ResNet 512-based facial features matching on SL1680.
[ANEES ASLAM] AI-407: Metrics Collection of Driver Distraction use case for SL1680 – ==Closed.==
Completed uploading model metrics of the InceptionV3-based driver distraction detection on SL1680.
[KEERTHANA SUNDARBABU] AI-355: UI for user login page with unit testing – ==Closed.==
The login UI has been integrated with the backend.
All the APIs are fixed with authentication headers.
Login expiration, session handling and logout functionalities are completed.
[JAYAPRAKASH MASILAMANI] AI-360: Backend APIs for User Login unit testing – ==Closed.==
Integrated the login and logout APIs with the frontend NextJS application.
Implemented the unit tests for the login and logout APIs using Jest.
Deployed the Login and Logout APIs in the Synaptics REST backend.
[JAYAPRAKASH MASILAMANI] AI-437: Collection of model metrics for the set of selected next 5 models including RPI – ==Closed.==
Collected the details of the below models and ran the model metrics pipeline:
1. FCN Resnet 101
2. EfficientNet B0
3. DeeplabV3 MobileNetV3
4. EfficientNetV2 Small
5. EfficientNet B7
[SWATHI RAJAMANI] AI-365: APIs for model details, metrics and download options – ==Closed.==
Added API routes for model details and metrics.
Made schema changes.
Added functionality support for the usecase and details APIs.
[SWATHI RAJAMANI] AI-363: UI for model details, metrics and download options – ==Closed.==
Updated the model details page UI based on the latest design.
Integrated the details API.
[SHAIK SHUAIB AFNAN] AI-433: Curation of model details for each of the selected models - next 5 models – ==Closed.==
Completed collection of metrics for the model zoo for the below models and pushed them successfully:
Unet
Wide resnet50
FCN resnet50
Densenet201
Proxyless_cpu
regnet_y_2_3
[SHAIK SHUAIB AFNAN] AI-436: Defect Detection System Training – ==Closed.==
Built an infrastructure defect detection system, an image classification model based on InceptionV4.
Built the InceptionV4 model from scratch, as there is no API implementation for the model in TensorFlow or PyTorch.
Integrated with DVC and MLflow for storing the dataset in an S3 bucket, and logged parameters & artifacts into MLflow.
Wrote a README covering the use case, model, data collection, training and prediction.
[SHAIK SHUAIB AFNAN] AI-438: Metrics collection of use case - RPI – ==Closed.==
Completed collecting the metrics and pushing the InceptionV4 infrastructure defect detection model to the database.

JIRA TICKET(S) IN PROGRESS:
[JENANI SAMPATH] AI-490: Collection of metrics for the Synaptics 5 models - ==In Progress.==
Collecting SL1680 metrics for the below models:
Regnet400mf
Mobilenet v2
GoogleNet
Resnext50
WakeWord
[ETA - 14/06]
[SURAJRAM SRINIVASAN] AI-442: Training of the Obstacle Detection usecase – ==In Progress.==
Started training the obstacle detection system usecase using the KITTI dataset.
Working on the optimizations for the training and generating the metrics for the usecase.
[ETA - 14/06]
[JAYAPRAKASH MASILAMANI] UI Changes for Authentication – ==In Progress.==
Created an AppContext wrapper for protected routing for every page.
Implemented protected routing for the base pages in the Synaptics UI.
[ETA - 17/06]
[SANJAY RAVI] AI-368: UI for Other page – ==In Progress==
Need to implement UI components for the Other page and the Contact Us page with proper transitions.
[ETA - 14/06]
[SANJAY RAVI] AI-369: APIs for Other pages – ==In Progress==
Updating the APIs for the model zoo Other page and Contact page.
[ETA - 14/06]
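The AI-459 backup flow above (daily pg_dump from cron, dump pushed to S3) could be composed roughly as below. This is a sketch only: the database name, bucket name, dump path, and script name are illustrative assumptions, not the values actually used.

```python
# Sketch of the daily Postgres backup described under AI-459: build the
# pg_dump command and the S3 destination for a given day. Names are placeholders.
from datetime import date

def backup_commands(db: str, bucket: str, day: date) -> list:
    """Return the shell commands for one day's backup run."""
    dump_file = f"/tmp/{db}-{day.isoformat()}.dump"
    return [
        f"pg_dump -Fc {db} -f {dump_file}",  # custom-format (compressed) dump
        f"aws s3 cp {dump_file} s3://{bucket}/backups/{db}/{day.isoformat()}.dump",
    ]

# Cron entry for 10:00 every day (host clock assumed to be IST):
# 0 10 * * * /usr/local/bin/modelzoo_db_backup.sh
if __name__ == "__main__":
    for cmd in backup_commands("modelzoo", "modelzoo-db-backups", date(2024, 6, 13)):
        print(cmd)
```

The EC2 instance's attached IAM role (as noted above) lets the `aws s3 cp` step run without static credentials.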
Direct API
The new direct API allows sending commands to a device through the MQTT topic "device//data_req". The agent can access the available data in memory and respond through the topic "device//data_resp". The messages received on the "data_req" topic can be of type GET or SET with a correlation ID. The corresponding "data_resp" messages can consist of runtime data/responses of the device with the correlation ID. The agent can therefore be adjusted to store the most recent runtime data in memory.
The idea is that we can modify the agent to keep the latest runtime data snapshot ready (periodically polled and stored as JSON in memory) so that it can be queried, and a response can be put on the response topic by querying the available runtime data snapshot.
The agent direct API is supported only by agents upgraded with this feature. For legacy agents, we would respond with “Not Supported API“. This is achieved by a feature flag received from the device. The direct API should be validated at the REST service based on this flag before initiating the request to the agent.

Schema:
Below are the schema changes to be added to commonfeatures in the accesspoint runtime schema.

Sample request/response for GET:
Sample request/response for POST:

Status Code | Message
200 | Channel changed successfully
206 | Channel changed for either radio_24 or radio_5
204 | No action performed
500 | Error occurred while processing the request

Cloud Workflow:
The new Go service, called the Direct req service, will use EMQX to send an MQTT request to the device. In response, the device will provide runtime data through an MQTT response topic.
The Bridge will be listening on the MQTT response topic of EMQX and, upon receiving a message, will handle it just like other runtime data and publish it to the devices.opt_direct data topic in Kafka.
The new Go service, called the Direct resp service, will subscribe to the Kafka topic devices.opt_direct data.
It will process the message in a similar way to the JSON inserter for runtime data and save it in Cassandra. Additionally, the service will store the same data in the Redis cache with a configurable time-to-live (TTL).
For each API call made to retrieve runtime attribute data, the direct API service will first search Redis for the device and its corresponding request attribute values. If the value is found in Redis, the service will respond with it. If not, the service will search Cassandra and initiate a direct request (an MQTT request) to the device to obtain the updated or new data, then wait for a specified time period. If the request is not processed within this interval, the service will return a timeout. The request and response will be associated with a unique session ID for correlation.
The latest version of the GET API includes a feature that allows it to determine whether it should retrieve the most recent data from Cassandra or request the latest data directly from the device.

High-level architecture:
New topics - runtime.directv3 and devices.directRuntime
Here's a brief description of the components:
New Go Service (Direct Req Service):
EMQX MQTT Broker:
Bridge:
Kafka:
New Go Service (Direct Resp Service):
API Gateway (APISix):
Clients (API Consumers):
==Need to include more details about the data formats, error handling, and scalability considerations.==

Sample Schema
accesspoint.json (7555)
radio_5.json (27642)
radio_24.json (17404)
wlan.json (4374)
wifi_client.json (5436)
eth_client.json (1280)

Sample requests and responses
Accesspoint
Client
Radio 2.4
Radio 5

Data Model
Tables used in Cassandra
Direct Look-up table (config.direct_lookup) - lookup table for the direct v1 session ID using type, id and timestamp

Column | Type
type (Partition key) | text
id (Partition key) | text
timestamp (Clustering key) | int
session_id | text
status | text

Redis
Key: -type
Value: Map

Kafka/EMQTT Topic Reference Diagram
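The read path described above (Redis first, then Cassandra, then a direct MQTT request correlated by a session ID with a timeout) can be sketched as follows. This is illustrative only: the actual services are written in Go; the redis/cassandra/mqtt objects here are in-memory stand-ins; the device ID filling the elided topic segment and the fall-back to the Cassandra value on timeout are assumptions.

```python
# Sketch of the Direct API read path: Redis cache -> direct MQTT request with a
# correlation/session id and timeout -> stale Cassandra value as a fallback.
import uuid

def get_runtime_attr(device_id, attr, redis, cassandra, mqtt, timeout_s=5.0):
    # 1. Cache hit in Redis (the real service stores entries with a TTL).
    cached = redis.get((device_id, attr))
    if cached is not None:
        return {"source": "redis", "value": cached}
    # 2. Read any stored value from Cassandra, but still refresh from the device.
    stored = cassandra.get((device_id, attr))
    # 3. Direct request to the device, correlated by a unique session id.
    session_id = str(uuid.uuid4())
    mqtt.publish(f"device/{device_id}/data_req",  # id segment assumed
                 {"type": "GET", "attr": attr, "correlation_id": session_id})
    reply = mqtt.wait_for_response(session_id, timeout=timeout_s)
    if reply is not None:
        return {"source": "device", "value": reply, "session_id": session_id}
    # Assumption: on timeout, return the last Cassandra value if one exists;
    # otherwise surface the timeout to the caller.
    if stored is not None:
        return {"source": "cassandra", "value": stored}
    return {"source": "timeout", "value": None, "session_id": session_id}
```

The session ID generated here corresponds to the `session_id` column of the config.direct_lookup table, which correlates each request with its response.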
Security Tasks
- [ ] Penetration Testing
  - [x] Investigate AWS policies for penetration testing
  - [x] Develop pen test plan for Astral instance
  - [ ] Review the test plan internally
  - [ ] Get permission from AWS to proceed with the test plan
  - [ ] Execute the pen test plan
- [ ] Software Development Lifecycle
  - [x] Get access to GitLab (http://gitlab.embedur.local) and create a login
  - [x] Start looking into NodeJS/JavaScript vulnerability scanning
  - [ ] Set up automated scanning for the Astral REST server
  - [ ] Set up automated scanning for the Astral WEB server
  - [x] Look into Golang vulnerability scanning
  - [ ] Set up automated scanning for Golang projects
Status - Week ending on 07/06
*Summary:

Internal-Engineering - Gitlab/Jira/AWS/Wiki: The EMS staging server at ems-staging.int.embedur.com was not operational. After flushing the DNS cache and configuring the resolvers to use Google's nameservers, the issue was resolved.

Internal Environments - QA/Dev: A new LLM model has been developed for faster responses, incorporating a caching mechanism to enhance speed. A new worker node has been added to the Airtel-QA3 server. The model's performance has been benchmarked, and the service is now ready for testing.

Singtel Production: Created an API spec document containing the basic API calls in the VMware Singtel staging environment, and shared the Postman collection JSON with the customer to proceed with penetration testing. Jeanette confirmed that NAT IP message consumption is fine, but they are still unable to consume the runtime streaming topics; a joint troubleshooting call with the customer is needed for debugging.

Airtel Production: Currently the ORC processor fills the data for widgets on the Summary page and some widgets on the Reports page. Since the HDFS path in production is complex, we created intermediary files that serve as the base for the other Spark jobs. We currently provide a daily self-healing report listing all devices that crossed the configured rule-engine threshold for CPU / memory / temperature / coverage hole / interference; Airtel's requirement is to compare the number of over-threshold occurrences for a device against the total number of samples in the cloud. Airtel also reported that for some devices the reboot counts on TR and Embedur do not match.
So, for some of the devices we have collected data at regular intervals on both sides for further investigation.

*Tickets information:

Closed Tickets | Description
--- | ---
DEVOPS-1163 | EMS Caddy malfunction
DEVOPS-1164 | Setup Raspberry Pi 4
AIR-3708 | LLM based resolution
AIR-3709 | JSON query processing
AIR-3715 | Day-wise access point / radio details with required fields
AIR-3716 | Change in self-healing report by comparing total number of samples

In-Progress Tickets | Description | ETA
--- | --- | ---
AIR-3647 | Create a design document and update WIKI | 10/06/2024
AIR-3707 | Prompt Engine for Recommendation Service | 14/06/2024
SMN-739 | Kafka message consumption from VMware Prod | 14/06/2024
SMN-742 | Multi-domain cert for Singtel Staging (SAN certs) | 14/06/2024
SMN-744 | Penetration testing for VMware Production setup | 14/06/2024
AIR-3717 | Reboot count analysis | 14/06/2024

PTO & Vacation:

Plan for next week:

Data Analytics: Create a Spark job for hourly statistics on Radio 2.4 and 5 data: retrieve the data from HDFS, convert it to CSV and ORC formats, load it into PySpark, transform it into a DataFrame, and update it in Cassandra.

Penetration testing and SFTP server connection establishment: Penetration testing has to be done from the customer's end, as we have shared the API spec document and Postman JSON collection. Since the DNAT, firewall and routing rules have been added, the SFTP server connection has to be established; a troubleshooting call with the customer is needed to resolve this issue.

Reboot analysis: As we have received the reboot dump from the TR side as well, the reboot counts for those devices will be compared with the Embedur dump.

*Detailed Summary:

Recommendation Service: A new LLM model has been developed for faster responses, incorporating a caching mechanism to enhance speed. A new worker node has been added to the Airtel-QA3 server. New prompts and a problem classification template have been developed.
The model's performance has been benchmarked, and the service is now ready for testing.

Caddy Malfunction: The EMS staging server at ems-staging.int.embedur.com was not operational. After flushing the DNS cache and configuring the resolvers to use Google's nameservers, the issue was resolved. The problem appears to have originated from the nameserver configuration; by removing and re-adding the nameserver records, we were able to restore proper functionality. The server is now accessible and working correctly.

Raspberry Pi 4: The new Raspberry Pi 4 setup is complete with Ubuntu 22.04, including all essential packages and tools for our projects. SSH is configured, ensuring secure remote access. Python 3 and the necessary libraries are installed, streamlining our development process. The system is now ready for immediate use and project deployment.

Singtel Penetration Testing and Kafka Message Consumption: Created an API spec document containing the basic API calls in the VMware Singtel staging environment. API calls related to accesspoint, wlan, client, radio, group profile and the push notification feature were added as part of the API spec. Also shared the Postman collection JSON file with the customer to proceed with penetration testing. The customer confirmed that NAT IP message consumption is fine, but we still need a debugging call with them, as the runtime streaming topics are not consumed consistently.

Spark - Intermediary File: All the necessary fields for the types radio_5, radio_24, client and accesspoint have been included, and the data for those values has been generated. A scheduler runs every hour to generate the intermediary files, so for every hour a single ORC file is created for each type (accesspoint, radio_5, etc.).

Self-Healing Summary: The entire logic has been written in PySpark using the intermediate hourly file, and the output contains the devices whose occurrence percentage is greater than or equal to 50%.
Tested the report generation for Airtel using the intermediary files, and it is working as expected. The process has also been automated, so the problematic-devices table is sent to Airtel as part of the daily self-healing summary mail.

Reboot analysis: Using Spark, we fetched the reboot count data from the ORC files for the conflicting devices reported by Airtel. We have the data for 05/06/2024, and the TR data has also been provided by Airtel. We need to verify the reboot counts for all time intervals between these two files.
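The ≥ 50% occurrence rule used by the self-healing summary can be expressed compactly. This is a plain-Python sketch of the aggregation (the production job runs in PySpark over the hourly intermediary files; the input shape here is an assumption for illustration):

```python
def problematic_devices(samples, threshold_pct=50.0):
    """Return device IDs whose share of threshold-crossing samples is >= threshold_pct.

    `samples` is an iterable of (device_id, crossed) pairs, where `crossed`
    is True when the sample exceeded the rule-engine threshold (CPU, memory,
    temperature, coverage hole or interference).
    """
    totals, crossings = {}, {}
    for device_id, crossed in samples:
        # Count every sample seen in the cloud for the device...
        totals[device_id] = totals.get(device_id, 0) + 1
        # ...and separately count the over-threshold occurrences.
        if crossed:
            crossings[device_id] = crossings.get(device_id, 0) + 1
    return sorted(
        d for d, total in totals.items()
        if 100.0 * crossings.get(d, 0) / total >= threshold_pct
    )
```

Comparing occurrences against the total sample count (rather than using a raw occurrence count) is exactly the change Airtel requested: a device sampled rarely no longer looks healthier than a device sampled often.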
Pipeline for Deployment
Architecture

The deployment architecture:
1. Source code is pushed to the dev branch and the application is deployed to the dev server. The development teams verify their changes on the dev server. Once all the changes look fine, the developers push the code from the dev branch to the QA branch.
2. The application is then deployed to the QA server, where the QA team performs quality checks and tests. Once it is approved by the QA team, the maintainers push the code from the QA branch to the beta branch.
3. The application is then deployed to the beta server, which always holds a working version of the app. From the beta branch the code is pushed to prod.
4. In this step the merge request has to be approved by a maintainer; once approved, the changes are deployed to the production server.

Frontend Pipeline

The model zoo app frontend deployment pipeline consists of two stages:
1. Build_model_zoo_app: builds a Docker image from the code changes.
2. Deploy_model_zoo_app: deploys the Docker Compose stack with the newly built image.

Gitlab-ci
1. The deploy stage must be duplicated and adjusted for every environment: dev, QA, release beta and production.
2. The refs indicate when the stage should be triggered, e.g. a merge to the QA branch triggers a deployment to the QA server.
3. The host details and credentials are stored as GitLab CI/CD variables; only authorized users can view them.
4. The QA and dev branches are protected, meaning only authorized maintainers can merge to them in order to trigger a deployment.

Note: The same process is followed for Synaptics.

Backend Pipeline

The backend pipeline consists of two stages:
1. Build_model_zoo_backend: builds a Docker image from the code changes.
2. Deploy_model_zoo_backend: deploys the Docker Compose stack with the newly built image.

Gitlab-ci
1. The deploy stage must be duplicated and adjusted for every environment: dev, QA, release beta and production.
2. The refs indicate when the stage should be triggered, e.g. a merge to the QA branch triggers a deployment to the QA server.
3. The host details and credentials are stored as GitLab CI/CD variables; only authorized users can view them.
4. The QA and dev branches are protected, meaning only authorized maintainers can merge to them in order to trigger a deployment.

Note: The same process is followed for Synaptics.
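The build/deploy structure above can be sketched as a `.gitlab-ci.yml` fragment. This is illustrative only, not the actual pipeline file: the image tag scheme, the `only: refs` trigger style, and the variable names (`$DEPLOY_USER`, `$QA_HOST`) are assumptions standing in for the real, access-controlled GitLab CI/CD variables.

```yaml
stages:
  - build
  - deploy

build_model_zoo_app:
  stage: build
  script:
    # Build a Docker image from the code changes in this commit.
    - docker build -t model-zoo-app:$CI_COMMIT_SHORT_SHA .

deploy_model_zoo_app_qa:
  stage: deploy
  only:
    refs:
      - qa          # triggered only when changes are merged to the QA branch
  script:
    # Host details and credentials come from protected GitLab CI/CD variables;
    # only authorized users can view them.
    - ssh $DEPLOY_USER@$QA_HOST "docker compose pull && docker compose up -d"
```

One such deploy job exists per environment (dev, QA, release beta, production), each keyed to its protected branch, which is what makes a merge to a given branch equivalent to a deployment to that environment.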
Device Inventory
Last audited on June 12, 2024

Devices for AI/ML

S.No | Device | Owner | Current User | Location | Comments
--- | --- | --- | --- | --- | ---
1 | Raspberry PI 3B+ (EURRPI-22) | Keerthana/Bala | Keerthana | Workstation 122 | Working
2 | Linux ThinkPad laptop (EURLTP-448) | Keerthana/Bala | | Workstation 122 | Working
3 | EURLTP-157 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working
4 | EURLTP-153 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working
5 | EURLTP-113 (Windows 10) | Jenani/Bala | Jenani | Lab B | Working
6 | Raspberry PI 4B (EURRPI-31) | Suraj/Bala | Suraj | Lab B | Working
7 | PI camera | Suraj/Bala | Suraj | Lab B | Working
8 | 3 PiBOX Micro HDMI cables (EURCABLE-19, EURCABLE-20, EURCABLE-21) | Bala | | Workstation 106 | Working
9 | Raspberry Pi 5 (EURRPI-35): single-board computer, 8 GB RAM, 2.4 GHz 64-bit quad-core Arm Cortex-A76, Bluetooth 5.0, BLE wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working
10 | Raspberry Pi 5 (EURRPI-36): single-board computer, 8 GB RAM, 2.4 GHz 64-bit quad-core Arm Cortex-A76, Bluetooth 5.0, BLE wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working
11 | Raspberry Pi 5 (EURRPI-37): single-board computer, 8 GB RAM, 2.4 GHz 64-bit quad-core Arm Cortex-A76, Bluetooth 5.0, BLE wireless. Includes: Raspberry Pi 5 board, AC power adapter (USB-C) | Bala | | Workstation 106 | Working
12 | 3 Raspberry PI 5 MP camera board modules (EURBOARD-15, EURBOARD-16, EURBOARD-17) | Bala | | Workstation 106 | Working
13 | Google Coral USB Edge TPU ML Accelerator coprocessor (EURPRO-01). Includes: coprocessor, USB Type-C cable | Bala | | Workstation 106 | Working
14 | 2 Robocraze USB 2.0 mini microphones for Raspberry Pi 3 / 2 Model B / Rpi 1 Model B+/B and laptop/desktop PCs (Skype, VoIP, voice-recognition software) (EURRCR-01, EURRCR-02) | Bala | | Workstation 106 | Working

Devices for TinyML

S.No | Device | Owner | Current User | Location | Comments
--- | --- | --- | --- | --- | ---
1 | Arduino Tiny Machine Learning Kit: Arduino Nano 33 BLE Sense board with 9-axis inertial sensor (ideal for wearable devices), humidity and temperature sensor (highly accurate environmental measurements), barometric sensor (e.g. a simple weather station), microphone (capture and analyse sound in real time), and gesture/proximity/light-color/light-intensity sensor (estimate room luminosity and nearby movement); 1 OV7675 camera; 1 Arduino Tiny Machine Learning Shield; 1 USB-A to Micro-USB cable | Bala | Bharath | Workstation 110 | Working device
2 | ESP32-CAM Wi-Fi/Bluetooth development board, DC 5 V dual-core wireless, with OV2640 camera and TF card module | Bala | Bharath | Workstation 110 | Working device
3 | Breadboard, 840 points, with jumper wire set (40+40+40) | Bala | | Workstation 91 | Working device
4 | OV7670 camera image sensor | Bala | Bharath | Workstation 110 | Working device

Devices for SYNAPTICS AIML

S.No | Device | Owner | Current User | Location | Comments
--- | --- | --- | --- | --- | ---
1 | SR110 A0 (Label: 35). Includes: UART bridge, JTAG connector, USB-C cable, USB-B cable | Bala | | Workstation 91 | Working device
2 | SR110 A0 (Label: 4). Includes: UART bridge, JTAG connector, USB-C cable, USB-B cable | Bala | | Workstation 91 | Working device
3 | SR110 A0 (Label: 3). Includes: UART bridge, JTAG connector, USB-C cable, USB-B cable | Bala | | Workstation 91 | Need to flash image using JTAG
4 | SR110 A0 (Label: 2). Includes: UART bridge, JTAG connector, USB-B cable | Bala | | Workstation 91 | Currently not in working state
5 | VS680_EVK (S/N: 2021030504). Includes: power adapter, Mini USB-B cable, SPI flash, camera interface board, JTAG interface board | Bala | | Workstation 106 | All the devices mentioned are placed inside a single box
6 | J-Link (S/N: 602007378) | Bala | | Workstation 91 | Working device
7 | Wipro Garnet B22 Smart LED Bulb | Bala | | Workstation 91 | Working device
8 | 7-port USB hub | Bala | | Workstation 91 | Working device
9 | Lenovo laptop (EURLTP-275), Windows 11 Pro | Bala | Jeyanth | Workstation 106 | Working device with USB access
10 | Lenovo laptop (EURLTP-256), Ubuntu 20.04.6 | Bala | | Lab B | Working device
11 | NUCLEO-144 STM32L4R5ZI evaluation board (P/N: 497-17668-ND): STM32L4R5 Nucleo-144, STM32L4 Arm Cortex-M4 32-bit MCU embedded evaluation board | Bala | | Workstation 91 | Working device
12 | STM32 Nucleo expansion board POW (P/N: 497-17665-ND) | Bala | | Workstation 91 | Working device
13 | Arduino UNO R3 board with DIP ATmega328P | Bala | | Workstation 91 | Working device
14 | I2C bi-directional logic level converter, 4-channel | Bala | | Workstation 91 | Working device
15 | 7-port USB hub | Bala | Jeyanth | Workstation 91 | Working device
16 | Logitech BCC950 conference cam (EURVCAM-1) | Bala | Bala | Workstation 106 | Working device
17 | 7-port USB hub | Bala | Bala | Workstation 106 | Working device
18 | Samsung monitor (EURMTR006) | Bala | Bala | Workstation 106 | Working device
19 | Wired keyboard (11AZFITLET) and mouse | Bala | Bala | Workstation 106 | Working device
20 | SL1680 EVK (S/N: K588000055). Includes: USB-C cable (C plugs on both ends), AC power adapter, AC plug adapter, USB-A to UART bridge | Bala | Bala | Workstation 106 | Working device
21 | SL1680 EVK (S/N: K588000042). Includes: USB-C cable (C plugs on both ends), AC power adapter, AC plug adapter, USB-A to UART bridge | Bala | Jeyanth | Workstation 91 | Working device
22 | Ethernet cable | Bala | Bala | Workstation 106 | Working device
23 | 8 USB-C cables (EURCABLE-11, EURCABLE-12, EURCABLE-13, EURCABLE-14, EURCABLE-15, EURCABLE-16, EURCABLE-17, EURCABLE-18) | Bala | Jeyanth/Bala | Workstation 91 | Working device
24 | Sabre evaluation board (S/N: AOI6059561). Includes: USB-C cable, USB-B cable | Bala | | Workstation 106 | Working device
25 | STM32N6570-DK (EDK32N6570). Includes: AI camera module (MB1854B), display (MB1860B), STM32N6570 and a daughter board (C240400148) | Bala | | Workstation 106 | Working device
26 | SR110 B0 (Label: 8). Includes: UART bridge, JTAG connector, 2 USB-C cables | Bala | | Workstation 106 | Working device
27 | SR110 B0 (Label: 9). Includes: UART bridge, JTAG connector, 2 USB-C cables | Bala | | Workstation 106 | Working device

ESP32-S3 Devices

S.No | Device | Owner | Current User | Location | Comments
--- | --- | --- | --- | --- | ---
1 | ESP32-S3 board (EURESP-01). Includes: power adapter, 1 USB-C cable (white) | Bala | | Workstation 91 | Working device
2 | ESP32-S3 board (EURESP-02). Includes: power adapter, 1 USB-C cable (white) | Bala | | Workstation 91 | Working device
3 | ESP32-S3 board (EURESP-03). Includes: power adapter, 1 USB-C cable (white) | Bala | | Workstation 91 | Working device
Model Zoo Plan
Plan: The development of the Model Zoo is planned in three phases: Phase 1 - Collect details about the models we have, then build a UI with those details. Phase 2 - Make the models available for download and usable on edge devices. Phase 3 - Build a PaaS like Edge Impulse. Phase 1: Goal: To identify use cases and relevant models, and to present the metrics of the models together with the use cases. Flow: The above flow chart depicts the development of the model zoo in phase one. 1) IDENTIFY THE USE CASES: The initial step focuses on gathering different use cases. The use cases can be edge- or TinyML-based, e.g. WiFi, telecom and retail use cases. Here are some of the identified use cases: 1. Face Matching 2. Image Segmentation 3. Object Detection & Identification 4. Gesture Detection 5. Pose Estimation. A survey (Aidan Smith) of different model-zoo instances from different chipset vendors is available here for reference: AI-ML Model Table.xlsx 5754169. We are open to suggestions, so feel free to add any TinyML-related use case here. 2) IDENTIFY MODELS FOR EACH GIVEN USE CASE: The next step is to identify the models that can be used to implement each use case. Model selection depends on multiple factors: 1. Model Size: TinyML applications require models that are lightweight and fit within the constraints of the target device's memory and storage. The model size should be small enough to ensure efficient execution on resource-constrained hardware. 2. Computational Complexity: The model's computational complexity determines the inference time required for making predictions on the device. TinyML models should have low computational overhead to ensure real-time or near-real-time performance on the target hardware. 3. Accuracy: While model size and computational complexity are important, the model must also provide sufficient accuracy for the intended application.
The selected model should achieve the desired level of performance while still meeting the constraints of the target device. 4. Data Requirements: The availability and characteristics of the training and inference data influence the choice of the model architecture and its ability to generalize to unseen examples. TinyML models should be trained on representative datasets that capture the variability and complexity of real-world inputs. 5. Latency and Throughput Requirements: Depending on the application, latency (response time) and throughput (number of inferences per second) requirements may vary. The selected model should meet these performance metrics while operating within the resource constraints of the target device. Additional models can be added here. 3) IDENTIFY METRICS FOR EACH MODEL AND DOCUMENTATION: Collecting metrics of models for embedded devices in TinyML applications involves evaluating various aspects of model performance, resource utilization, and behavior on the target hardware. We can evaluate the model metrics using platforms such as the following: 1. MLPerf: A benchmark suite for evaluating the performance of machine learning systems, including embedded devices. It provides standardized benchmarks across various tasks and platforms to measure inference time, accuracy, and energy consumption. 2. TensorFlow Lite Micro: A framework for deploying machine learning models on microcontrollers. It offers tools for profiling model performance, including inference time and memory usage, to optimize models for embedded devices. 3. Arduino Performance Monitor: A tool for profiling the performance of machine learning models deployed on Arduino boards. It helps developers measure inference time, memory usage, and other metrics to optimize model performance for resource-constrained environments. 4. Edge Impulse: A platform for developing and deploying embedded machine learning models.
It provides tools for collecting and analyzing data, training models, and evaluating performance metrics such as accuracy, latency, and energy consumption on edge devices. 4) EXECUTION OF MODEL AND LOGGING METRICS / LAUNCH: The metrics identified in the steps above will be collected by executing the identified models, and the collected metrics will be logged. The final step is to execute the trained model and package all of the model resources into an artifact. These artifacts are then made available in the UI of the web application so that people can use the models in their personal or company projects. * The artifacts include everything required to run the model in any user's local environment without errors.

Milestones

| Description | ETA |
|-------------|-----|
| 1. Model Zoo development environment with draft site in AWS. 2. APIs for model details with filters. 3. Wireframe for Model details page in Figma. 4. Model metrics collection for FaceNet-512 and FaceNet-128 | 5th April 2024 |
| 1. Data model confirmation for Model metrics. 2. UI for model list page. 3. Wireframe for model metrics page in Figma. 4. Model metrics for YoloV8-Detection, YoloV8-Segmentation and YoloV8-Pose | 12th April 2024 |
| 1. API for model metrics information. 2. UI for model details page. 3. Finalization of wireframe. 4. Model metrics for YolACT, YoloV5-Detection, YoloV5-Segmentation and YoloV5-Pose | 19th April 2024 |
| 1. Model metrics basic info UI. 2. Basic unit testing | 26th April 2024 |
| 1. Website testing with available models. 2. Model metrics detailed info UI | 3rd May 2024 |
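The latency and throughput metrics called out above can be collected with a small logging harness. The sketch below is illustrative only: `run_inference` is a hypothetical stand-in for a real model call (e.g. a TFLite interpreter invoke), so the numbers it produces do not reflect any real model.

```python
import time

def run_inference(frame):
    # Hypothetical stand-in for a real model call (e.g. interpreter.invoke()).
    return sum(frame) % 2

def collect_metrics(frames, warmup=3):
    # Warm-up runs are discarded so one-time setup costs don't skew latency.
    for f in frames[:warmup]:
        run_inference(f)
    latencies = []
    for f in frames:
        t0 = time.perf_counter()
        run_inference(f)
        latencies.append(time.perf_counter() - t0)
    avg_latency = sum(latencies) / len(latencies)
    return {
        "avg_latency_s": avg_latency,
        "throughput_fps": 1.0 / avg_latency if avg_latency > 0 else float("inf"),
        "num_inferences": len(latencies),
    }

metrics = collect_metrics([[1, 2, 3]] * 100)
```

The returned dictionary is what would be logged per model in step 4, alongside accuracy and memory figures gathered from the profiling platforms listed above.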
Recommendation AI Model
Architecture Machine Learning Model (Source) This model focuses on developing a CatBoost machine learning model to forecast whether a customer will initiate a call, leveraging their access point's runtime logs. The model is trained on a dataset comprising a set of distinct fields. By analyzing runtime logs, the model aims to discern patterns indicative of call-initiation behavior. The extensive dataset ensures a comprehensive exploration of potential predictors. File Structure model.py: This file employs pickle to manage the CatBoost model, enabling call-in predictions (binary 0 or 1). It also extracts the top 5 field indexes along with their SHAP scores, offering insights into the prediction rationale. This approach facilitates interpretability and understanding of the model's reasoning process to a certain extent. main.py: The main script serves as a single hub, employing Flask to orchestrate model operations. It accepts data in JSON format via a POST method, returning parsed and indexed model outputs. Additionally, it correlates the results with the top field indexes retrieved from a local dictionary, enhancing result interpretation and usability. Model Files model.pkl: This file contains the serialized CatBoost model. Test and Prototyping Notebooks: Jupyter notebooks for testing and prototyping purposes are available in the repository. * These contain results, outcomes and analysis of different ML models, with a variety of data points collected at different points in time. How to Run To initiate a local development Flask server, run the following command: Sample Payload using cURL You can test the service using cURL with the following sample payload: Sample Response
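The model.py flow described above (a binary call-in prediction plus the top 5 SHAP field indexes mapped to names via a local dictionary) can be sketched in plain Python. Everything below is a stand-in: the field-name dictionary, threshold, and scoring rule are illustrative assumptions, whereas the real service unpickles model.pkl (CatBoost) and computes genuine SHAP values.

```python
# Hypothetical field dictionary; the real code reads this mapping locally.
FIELD_NAMES = {0: "rssi", 1: "reboots", 2: "uptime",
               3: "tx_errors", 4: "rx_errors", 5: "channel_util"}

def predict_call_in(shap_scores, threshold=0.0):
    """Return (prediction, top-5 fields) from per-field SHAP scores.

    prediction: 1 if the summed SHAP contribution exceeds the threshold
    (mimicking the binary call-in output); the top-5 fields are the indexes
    with the largest absolute contribution, joined with FIELD_NAMES.
    """
    prediction = 1 if sum(shap_scores) > threshold else 0
    top5 = sorted(range(len(shap_scores)),
                  key=lambda i: abs(shap_scores[i]), reverse=True)[:5]
    return prediction, [(i, FIELD_NAMES.get(i, f"field_{i}"), shap_scores[i])
                        for i in top5]
```

In the real main.py, the same correlation step would run on the output of the pickled model before the Flask handler serializes the response back to JSON.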
Gstreamer
GStreamer is an open-source software framework for handling multimedia like audio and video. It's like a set of building blocks that developers can use to create applications that work with multimedia. Here's a breakdown of what GStreamer does: Building blocks: GStreamer is based on small software components called elements. These elements can handle specific tasks, like reading a video file, encoding audio, or displaying video on the screen. Pipeline concept: By connecting these elements together in a specific order, developers can create pipelines that perform complex multimedia tasks. Imagine building a pipeline where one element reads a video file, another converts it to a different format, and a third one displays the converted video on your screen. Flexibility: The beauty of GStreamer is its flexibility. Developers can easily swap elements in and out of the pipeline depending on the specific needs of their application. This allows them to handle a wide variety of multimedia formats and processing tasks. Here are some real-world applications of GStreamer: Media players: Many media players use GStreamer behind the scenes to handle different video and audio formats. Streaming applications: GStreamer can be used to build applications for streaming audio and video over networks. Video editing: GStreamer can be a powerful tool for building video editing applications. Overall, GStreamer is a versatile framework that gives developers a lot of control over how they handle multimedia data. Sample pipelines for various applications Stream the video from live capturing Gst-ai application: Astra Machina has a sample application called gst-ai which runs a machine learning model over video frames (either a static video or camera input). In SDK v0.9.0 the gst-ai application works for image classification over a static video.
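The exact gst-ai command line is not reproduced here, but as a rough illustration a pipeline of the shape described in the element-by-element walkthrough (filesrc, decodebin, tee, a 224x224 preprocessing branch ending in an appsink named synap_sink, and an overlay branch ending in waylandsink) could be expressed like this. The file name and branch layout are assumptions drawn from the prose, not the actual application command:

```shell
# Illustrative sketch only -- not the exact gst-ai command.
gst-launch-1.0 filesrc location=test_file.mp4 ! decodebin ! videoconvert ! tee name=t_data \
  t_data. ! queue ! videoconvert ! videoscale ! video/x-raw,width=224,height=224 ! \
    appsink name=synap_sink \
  t_data. ! queue ! textoverlay name=ic_label ! waylandsink fullscreen=true
```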
The command for running this application is The pipeline used in this application is Reading the video: filesrc location=test_file.mp4: This section uses a filesrc element to read the video file named "test_file.mp4". Decoding the video: ! decodebin: The video stream is then fed into a decodebin element. This element automatically detects the video format and uses the appropriate decoder to convert the compressed video data into a raw format. Video conversion: ! videoconvert: This element might be there for optional format conversion. Without knowing the specific output of the decoder, it's difficult to say for sure; it might be converting the raw video data to a specific format for further processing. Splitting the video stream (branching): ! tee name=t_data: This is a crucial part of the pipeline. A tee element is used to duplicate the video stream; the element is named "t_data" here. This creates a branching point where the video data is sent to two separate processing paths. This path takes the video data from the "t_data" branch. queue: This element might be used for buffering the video data to handle fluctuations in processing speed. videoconvert: This element likely converts the video data to RGB format, which is a common format for image processing tasks. videoscale: This element resizes the video frame to a width of 224 pixels and a height of 224 pixels, because the image classification model runs on 224x224 images only. This part is the preprocessing of the video frames before running inference. This part gives the path for the model, which is read from the param file (.json). The classifier mode specifies the post-processing mode used to interpret the results, and synap_sink is the name of the appsink which is used to capture the results from the machine learning model and do post-processing. This path runs the overlaying of results from synap_sink over the video frames. For image classification the results are visualized as labels at the bottom of the video.
Textoverlay is used for this application, and it is accessed by the name ic_label. Finally, a waylandsink fullscreen=true element displays the video with the overlaid text on the screen. The fullscreen=true option specifies that the video should be displayed in full-screen mode. Cairo overlay A Cairo overlay refers to a technique for overlaying graphical elements (text, shapes, images) on top of a video stream using the Cairo graphics library. Here's a breakdown of the concept: Components Involved: 1. GStreamer Pipeline: This is the core framework that handles video data processing. It provides elements for capturing video from webcams, files, or network streams, decoding compressed video, and displaying the video on the screen. 2. Cairo Library: This is a powerful 2D graphics library that allows you to draw various shapes, text, and images onto a surface. 3. Cairo Overlay Element: This is a specific GStreamer element (often referred to as cairooverlay or similar, depending on the specific plugin implementation) that bridges the gap between GStreamer and Cairo. It allows you to integrate Cairo drawing functionalities within your GStreamer pipeline. Working Principle: 1. The GStreamer pipeline captures or receives a video stream. 2. The Cairo overlay element is inserted into the pipeline at a specific point where you want to add the overlay. 3. When a video frame arrives at the Cairo overlay element, two signals are emitted: "prepare": This signal provides an opportunity to set up the Cairo context for drawing on the current video frame. You can define fonts, colors, and other graphical properties here. "draw": This signal is emitted for each video frame. It provides a Cairo context and the timestamp of the frame. You can use the Cairo API to draw your desired overlay elements (text, shapes, images) onto the video frame within this callback function. Benefits of Using Cairo Overlays: Flexibility: You can create dynamic overlays that change based on the video content or user interaction.
Customization: Cairo provides a rich set of functionalities for drawing various graphical elements, allowing for diverse overlay designs. Integration: It seamlessly integrates Cairo's drawing capabilities within your GStreamer video processing pipeline.
Driver Distraction System
Usecase Description: Classifying different driver actions to determine whether the driver is distracted or driving safely. Model used: InceptionResNetV2 Hardware Support: Raspberry PI PI Camera Base Model Github Link: Gitlab Link to Repo and Steps to Train: Backbone Research Paper: Dataset Link: Optimizations Done: 1. Performed normalization of pixel values using the preprocessing technique specific to InceptionResNetV2, which applies mean subtraction and scaling to the pixel values based on the ImageNet dataset's mean and standard deviation. This preprocessing aligns the input data more closely with the training data used to train InceptionResNetV2. 2. Performed data augmentation to cover different brightness levels, rotations and sizes. 3. Implemented early stopping to prevent the model from overfitting. 4. Added a learning rate scheduler, which can lead to more stable training, faster convergence, better generalization and robustness to hyperparameters. 5. Performed transfer learning using InceptionResNetV2, modifying the network architecture: the initial layers were frozen, and a global average pooling layer and a dense layer (with the number of neurons equal to the number of classes of the use case) were added. Download Formats Available: Tflite () Input and Output Images: Input Image: Output Image: JSON of the details (For use for pushing into DB)
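The normalization in point 1 (mean subtraction and scaling with ImageNet statistics) can be sketched in plain Python. In practice the framework's own `preprocess_input` for InceptionResNetV2 should be used; the constants below are the commonly quoted ImageNet per-channel values, assumed for illustration rather than taken from the repo.

```python
# Commonly quoted ImageNet per-channel statistics (RGB) -- assumed values.
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)

def normalize_pixel(rgb):
    """Map one 8-bit RGB pixel to normalized floats: scale to [0, 1],
    subtract the per-channel mean, divide by the per-channel std."""
    return tuple(
        (value / 255.0 - mean) / std
        for value, mean, std in zip(rgb, IMAGENET_MEAN, IMAGENET_STD)
    )
```

Applied over the whole frame (e.g. vectorized with NumPy), this is the "alignment with the training data" step described above.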
Gen AI
What is Gen AI? Types of Gen AI? What have we done? Real-time integration? How could this benefit us? How feasible is this? Phi 3 Llama Local LLM OpenELM Public DataPoints Building our own model
Build an image V1.0.0
Reference link: https://synaptics-astra.github.io/doc/v/0.9.0/yocto.html#how-to-build-an-image (Not necessary - only if you need an additional build environment; spin up and use a Docker container.) To start the container, use the following command line: In the Build-Server, clone the repository with the following command: current branch - ==v1.0.0== Then run the following commands to build an image: Run them in a ==screen== session, due to the build duration (~3 hours for a fresh, complete build) ![accept the conditions with [space]](/api/attachments.redirect?id=7c9d3f53-1eb1-4d3f-844c-14857ee1bcd0 " =640x400") * The resulting ==Image== is generated in:
[UI] Location widget
==Widget ID : 4010== Wireframe (Device info page): ==Component - Location Widget== HTML [location-widget.component.html] Style [location-widget.component.scss]
The following properties are used in the location-widget:
@media: Used for responsive web design across different devices.
!important: This CSS rule gives a property/value higher priority than normal.
padding-top: Sets the top padding (space) of an element.
margin-top: Sets the top margin of an element.
flex-direction: Sets the direction of the flexible items.
font-weight: Sets the weight (thickness) of the font [bold/bolder/normal/100-900].
border-radius: Defines the radius of the element's corners.
Typescript [location-widget.component.ts] ==Component - Tile:== HTML [tile.component.html] Typescript [tile.component.ts] Disabled for 5 mins
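As an illustration, the properties listed above might appear together in the widget stylesheet roughly like this (the class names and values are hypothetical, not taken from the actual location-widget.component.scss):

```scss
.location-widget {
  display: flex;
  flex-direction: column;        // stack the widget contents vertically
  padding-top: 8px;
  border-radius: 6px;

  .title {
    font-weight: bold;           // any of bold / bolder / normal / 100-900
    margin-top: 4px !important;  // override a conflicting base style
  }
}

// Responsive tweak for small screens
@media (max-width: 600px) {
  .location-widget {
    padding-top: 4px;
  }
}
```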
Software Development Lifecycle - Security Guidelines
The focus is on ensuring code security by identifying and mitigating risks in project dependencies and codebases.
NodeJS/JavaScript Vulnerability Scanning
Tools to Use:
npm audit: Built-in tool for scanning Node.js dependencies.
Retire.js: Detects vulnerabilities in frontend JavaScript libraries.
OWASP Dependency-Check: Identifies known vulnerabilities in project dependencies.
Setup and Usage:
1. npm audit:
Install: No installation needed (integrated with npm).
Run:
Review the report and address any vulnerabilities.
2. Retire.js:
Install:
Run:
Document and fix any vulnerabilities found.
3. OWASP Dependency-Check:
Download from OWASP Dependency-Check.
Configure and run using the CLI.
Analyze the results and take appropriate actions.
Automated Scanning for the Astral REST Server
Tools to Use:
OWASP ZAP: For dynamic analysis.
SonarQube Community Edition: For static code analysis.
Setup and Integration:
1. OWASP ZAP:
Install OWASP ZAP.
Integrate into the GitLab CI/CD pipeline:
Schedule regular scans and review the reports.
2. SonarQube:
Install and configure SonarQube.
Integrate with the GitLab CI/CD pipeline for static analysis:
Regularly review SonarQube reports and address issues.
Automated Scanning for the Astral WEB Server
Tools to Use:
OWASP ZAP: For dynamic web application security testing.
Nikto: For web server scanning.
Setup and Integration:
1. OWASP ZAP: Same setup as the REST server.
2. Nikto:
Install:
Run:
Integrate with GitLab CI/CD for regular scanning and reporting:
Golang Vulnerability Scanning
Tools to Use:
Gosec (Go Security Checker): For security scanning.
Setup and Usage:
1. Gosec:
Install:
Run:
Document and remediate vulnerabilities found.
Automated Scanning for Golang Projects
Tools to Use:
Gosec
Setup and Integration:
1. Gosec:
Integrate into the GitLab CI/CD pipeline:
Configure to run on code commits, merges, or scheduled intervals.
Monitor and fix issues based on the scan results.
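The GitLab CI integrations described above could look roughly like the following .gitlab-ci.yml fragment. This is a hedged sketch only: the job names, stage names and container image tags are assumptions, not the project's actual pipeline configuration:

```yaml
stages:
  - security

npm_audit:
  stage: security
  image: node:20
  script:
    - npm ci
    # Fail the job on high-severity findings
    - npm audit --audit-level=high

gosec_scan:
  stage: security
  image: golang:1.22
  script:
    - go install github.com/securego/gosec/v2/cmd/gosec@latest
    - gosec ./...
```

Equivalent jobs for Retire.js, OWASP ZAP, SonarQube and Nikto would follow the same shape, each running its scanner in the `script` section.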
OWASP Dep-Scan
OWASP Dep-Scan is a next-generation security and risk audit tool based on known vulnerabilities, advisories, and license limitations for project dependencies. Both local repositories and container images are supported, making it ideal for CI environments.
Features:
Scans most application code, local repos, Linux container images, Kubernetes manifests, and OS packages.
Performs advanced reachability analysis for multiple languages.
Fast, local package vulnerability scanning without using a server.
Generates a Software Bill-of-Materials (SBOM) with Vulnerability Disclosure Report (VDR) information.
Creates a Common Security Advisory Framework (CSAF) 2.0 VEX document.
Conducts deep package risk audits for dependency confusion attacks and maintenance risks.
Usage:
Dep-Scan is ideal for use during continuous integration (CI) and as a local development tool.
Installation:
Using OCI Artifacts via ORAS CLI:
Single Binary Executables:
Configuring GitLab CI/CD:
Add the following job to your file to integrate OWASP Dep-Scan into your pipeline:
Running Dep-Scan:
Dep-Scan can be run in various modes depending on your requirements. Here are some common commands:
Scanning a Local Directory:
Server Mode:
In server mode, use the following commands:
Cache the vulnerability database:
Scan a local directory:
Additional Options and Customization:
Dep-Scan offers various customization options through environment variables and command-line arguments. For example, to enable risk audits and use a specific private namespace:
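As a sketch of the GitLab CI job mentioned above (the actual snippet is elided on this page), a Dep-Scan job might look like the following. The container image name and flags are assumptions based on dep-scan's documented CLI, so verify them against the current Dep-Scan documentation:

```yaml
depscan:
  stage: security
  image: ghcr.io/owasp-dep-scan/dep-scan:latest   # assumed image name
  script:
    # --src points at the code to scan; --reports-dir collects the output
    - depscan --src "$CI_PROJECT_DIR" --reports-dir "$CI_PROJECT_DIR/reports"
  artifacts:
    paths:
      - reports/
```

Publishing the `reports/` directory as a job artifact makes the SBOM/VDR output reviewable from the pipeline page.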
User Login API Flow
Flow Diagram:
Weekly Status Report -6th June
Executive Summary: Model Zoo: Worked on setting up the caching and the optimizations for the landing and the models listing pages. Brought out the APIs and the initial page for the model details and the metrics. Worked on bringing up the sign-in pages for the synaptics site with the APIs for the relevant tasks. Set up the pipeline for the SL1680 metrics and started collecting the metrics for the models from SL1680. Collected general model metrics and the usecase model metrics and pushed them to the DB. Accomplishments: Dev: AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-398: Planning of DB data model for tenancy support – ==Closed.== AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.== AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.== AI-413: Setup of SL1680 board for executing the models – ==Closed.== AI-452: Benchmark metrics for SL1680 – ==Closed.== AI-356: Backend APIs for landing page with unit testing – ==Closed.== AI-366: Backend APIs for model listing and models search – ==Closed.== AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-431: Metrics collection of use case - RPI – ==Closed.== AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.== AI-357: UI for Landing Page – ==Closed.== AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.== AI-406: Metrics collection of use case - RPI – ==Closed.== AI-453: API request memoization and caching data – ==Closed.== AI-377: Packaging and Bundling of CLI tools with documentation on usage – ==Closed.== AI-375: CLI tool support for deployment into Raspberry Pi – ==Closed.== AI-393: Collection of model metrics for Synaptics 5 models – ==In Progress.== AI-448: Plan for DB backup using PG backup – ==In Progress.== 
AI-358:UI for user sign-up / login page with unit testing – ==In Progress.== AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.== AI-365: APIs for model details, metrics and download options – ==In Progress.== AI-363: UI for model details, metrics and download options – ==In Progress.== AI-368: UI for Other page– ==In Progress== AI-369: APIs for Other pages– ==In Progress== DevOps: AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.== AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.== AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.== AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.== AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.== Issues for Escalations/Blockers: NIL Plan for next week: Model Zoo: Usecase execution for Obstacle detection and CSI based person detection usecases Collection of models and metrics for RPI and SL1680 for the next set of models. Bug fixing on the UI and the API part. Planning for deployment of the public and synaptics sites. Plan for DB migration for the various instances. PTO and Vacation: Jenani: 03rd June Sanjay: 04th June DETAILED SUMMARY : JIRA TICKET(S) COMPLETED: [JENANI SAMPATH] AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.== Modified the API endpoints based on new db schema in the pipeline. Completed the metrics collection using pipeline. [JENANI SAMPATH] AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.== Completed the metrics collection for 5 models, Yolov8face,wake_word tflite,yolact,Mobilenetv3 large,rtdetr. Collected model details and images for all of the above models and added it in details.txt. Pipeline was successful. [JENANI SAMPATH] AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.== Configured pipeline to collect metrics directly from sl1680 board. 
Tested the working and updated the documentation for the same. [SURAJRAM SRINIVASAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== Set up the pipeline for collecting the metrics using the MLTK profiler. Collected the model metrics and pushed the details of the model to the DB. [SURAJRAM SRINIVASAN] AI-398: Planning of DB data model for tenancy support – ==Closed.== Finalized and planned the DB data model for the synaptics site. Decided on adding a new schema for the synaptics site and modified the backend accordingly. [SURAJRAM SRINIVASAN] AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.== Explored the profiler tools for gathering metrics from RPI and SL1680. Finalized the metrics pipeline and worked on the metrics collection from SL1680 for a sample usecase. [SURAJRAM SRINIVASAN] AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.== Identified the different models and the usecases that can be brought out for the model zoo site. Drafted a table with the general models, the artifacts and the other related details for the general models and sent it out for review. Followed the same steps to get a collection of the usecase models and frame the usecases as well. [SURAJRAM SRINIVASAN] AI-413: Setup of SL1680 board for executing the models – ==Closed.== Installed the necessary packages and set up the SL1680 board for metrics collection. Tested the pipeline and verified the execution of the models. 
[ANEES ASLAM] AI-452: Benchmark metrics for SL1680 – ==Closed.== Completed and streamlined benchmarking for SL1680. [KEERTHANA SUNDARBABU] AI-356: Backend APIs for landing page with unit testing – ==Closed.== Modified all the landing page APIs based on the new schema. New API spec: All the APIs have been modified according to the Sequelize ORM and deployed to all 3 servers. Shared unit test results for both the landing and listing pages on Jira. [KEERTHANA SUNDARBABU] AI-366: Backend APIs for model listing and models search – ==Closed.== Search API has been added in the backend: /api/models?search= The API fetches the model list based on whether the search param is model_name or usecase_name. The search API is also sequelized, tested and deployed. [JAYAPRAKASH MASILAMANI] AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.== Collected the model metrics for different YOLO models like instance segmentation, object detection, pose estimation, face detection and FastSAM segmentation through MLTK profiling. Pushed all the model metrics to the pipeline and loaded them in the database. [JAYAPRAKASH MASILAMANI] AI-431: Metrics collection of use case - RPI – ==Closed.== Generated the model metrics for the InceptionResNetV2 model of the Driver Distraction usecase. Pushed the model metrics via the CI pipeline and loaded the same in the DB. [JAYAPRAKASH MASILAMANI] AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.== Collected the model metrics for PyTorch models such as AlexNet, SqueezeNet, ShuffleNet, MobileNetV3 and ResNet50 through MLTK profiling. Pushed all the model metrics to the pipeline and loaded them in the database. [SWATHI RAJAMANI] AI-357: UI for Landing Page – ==Closed.== Updated the landing page and search page based on the new API changes. Added pagination for model listing. Added filter functionalities for model filtering. 
[SHAIK SHUAIB AFNAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== Pushed metrics of the ResNet101, ShuffleNetV2 & MobileNetV3 PyTorch models into the ai-model-zoo repository. [SHAIK SHUAIB AFNAN] AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.== Pushed the face-matching data into an Amazon S3 bucket using DVC and pulled it for use in inference. Wrote a script to log the parameters, metrics and artifacts into MLflow. Performed inference of the face matching system using the data pulled from DVC and logged the training info into MLflow. Added details on how to perform the inference to the README file. [SHAIK SHUAIB AFNAN] AI-406: Metrics collection of use case - RPI – ==Closed.== Pushed the use-case metrics from the Raspberry Pi for the face-matching system into the ai-model-zoo repository successfully. The metrics are collected using the TFLite file. [SANJAY RAVI] AI-453: API request memoization and caching data – ==Closed.== Implemented caching for the Home and Models pages. Implemented animations and transitions. [SANJAY RAVI] AI-377: Packaging and Bundling of CLI tools with documentation on usage – ==Closed.== Packaging and bundling of the CLI tool has been done and it has been pushed to PyPI. [SANJAY RAVI] AI-375: CLI tool support for deployment into Raspberry Pi – ==Closed.== CLI support for deployment on the RPI has been completed. JIRA TICKET(S) IN PROGRESS: [JENANI SAMPATH] AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.== Created deployment pipelines for each environment: dev, qa, release beta and production respectively. Need to update GitLab variables for release beta and prod once the setup is ready. [ETA - 12/06] [JENANI SAMPATH] AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.== Created deployment pipelines for each environment: dev, qa, release beta and production respectively. 
Need to update GitLab variables for release beta and prod once the setup is ready. [ETA - 12/06] [SURAJRAM SRINIVASAN] AI-393: Collection of model metrics for Synaptics 5 models – ==In Progress.== Started collecting the metrics for the Synaptics models using the SL1680 pipeline. Working on collecting the metrics for the next set of models. [ETA - 07/06] [SURAJRAM SRINIVASAN] AI-448: Plan for DB backup using PG backup – ==In Progress.== Tested using PG backup and PG restore for backup of the DB data. Working on writing a CRON script to trigger a backup on a daily basis. [ETA - 07/06] [KEERTHANA SUNDARBABU] AI-358: UI for user sign-up / login page with unit testing – ==In Progress.== Working on designing the user login page UI. Improving the UI and working on the UI flow. Working on integrating with backend user authentication using an access token. [ETA - 07/06] [JAYAPRAKASH MASILAMANI] AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.== Created a user pool in AWS Cognito for the user management module of the Synaptics Model Zoo. Completed the implementation of APIs for user sign-up, user login and user logout using the CognitoServiceProvider module in the backend. Implemented the authentication middleware to verify the Cognito access token using the CognitoJwtVerifier module for all the fetch APIs in the backend. Working on the unit test implementation of the user login and logout APIs. 
[ETA - 07/06] [SWATHI RAJAMANI] AI-365: APIs for model details, metrics and download options – ==In Progress.== Added an API route for model details and metrics. Made schema changes. [ETA - 07/06] [SWATHI RAJAMANI] AI-363: UI for model details, metrics and download options – ==In Progress.== Updated the Model details page from the API references. [ETA - 07/06] [SANJAY RAVI] AI-368: UI for Other page – ==In Progress.== Need to implement UI components for the Other page with proper transitions. [ETA - 07/06] [SANJAY RAVI] AI-369: APIs for Other pages – ==In Progress.== Updating APIs for the model zoo Other pages. [ETA - 07/06]
###Human:tell me about Weekly Status Report -6th June ###assistant: Executive Summary: Model Zoo: Worked on setting up the caching and the optimizations for the landing and the models listing pages. Brought out the APIs and the initial page for the model details and the metrics. Worked on bringing up the sign-in pages for the synaptics site with the APIs for the relevant tasks. Setup the pipeline for the SL1680 metrics and started collecting the metrics for the models from SL1680. Collected general model metrics and the usecase model metrics and pushed to the DB. Accomplishments: Dev: AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-398: Planning of DB data model for tenancy support – ==Closed.== AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.== AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.== AI-413: Setup of SL1860 board for executing the models – ==Closed.== AI-452: Benchmark metrics for SL1680 – ==Closed.== AI-356: Backend APIs for landing page with unit testing – ==Closed.== AI-366: Backend APIs for model listing and models search – ==Closed.== AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-431: Metrics collection of use case - RPI – ==Closed.== AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.== AI-357: UI for Landing Page – ==Closed.== AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.== AI-406: Metrics collection of use case - RPI – ==Closed.== AI-453- API request memoization and caching data– ==Closed.== AI-377 - Packaging and Bundling of CLI tools with documentation on usage– ==Closed.== AI-375-CLI tool support for deployment into Rapsberry Pi– ==Closed.== AI-393: Collection of model metrics for Synaptics 5 models – ==In 
Progress.== AI-448: Plan for DB backup using PG backup – ==In Progress.== AI-358:UI for user sign-up / login page with unit testing – ==In Progress.== AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.== AI-365: APIs for model details, metrics and download options – ==In Progress.== AI-363: UI for model details, metrics and download options – ==In Progress.== AI-368: UI for Other page– ==In Progress== AI-369: APIs for Other pages– ==In Progress== DevOps: AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.== AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.== AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.== AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.== AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.== Issues for Escalations/Blockers: NIL Plan for next week: Model Zoo: Usecase execution for Obstacle detection and CSI based person detection usecases Collection of models and metrics for RPI and SL1680 for the next set of models. Bug fixing on the UI and the API part. Planning for deployment of the public and synaptics sites. Plan for DB migration for the various instances. PTO and Vacation: Jenani: 03rd June Sanjay: 04th June DETAILED SUMMARY : JIRA TICKET(S) COMPLETED: [JENANI SAMPATH] AI-387: CI/CD pipeline to push model metrics to DB – ==Closed.== Modified the API endpoints based on new db schema in the pipeline. Completed the metrics collection using pipeline. [JENANI SAMPATH] AI-390: Curation of model details for each of the selected models - 5 models - ==Closed.== Completed the metrics collection for 5 models, Yolov8face,wake_word tflite,yolact,Mobilenetv3 large,rtdetr. Collected model details and images for all of the above models and added it in details.txt. Pipeline was successful. 
[JENANI SAMPATH] AI-456: Pipeline for collection of metrics from SL1680 - ==Closed.== Configured pipeline to collect metrics directly from sl1680 board. Tested the working and updated documentation for the same. [SURAJRAM SRINIVASAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== Setup the pipeline for collecting the metrics using the MLTK profiler. Collected the model metrics and pushed the details of the model to the DB. [SURAJRAM SRINIVASAN] AI-398: Planning of DB data model for tenancy support – ==Closed.== Finalized and planned on the DB data model for the synaptics site. Decided on the adding a new schema for the synaptics site and modified the backend accordingly. [SURAJRAM SRINIVASAN] AI-408: Plan for metrics gathering in RPI and SL1680 – ==Closed.== Explored on the profiler tools for gathering metrics from RPI and SL1680. Finalized on the metrics pipeline and working on the metrics collection from SL1680 for a sample usecase [SURAJRAM SRINIVASAN] AI-412: Selection of models for Synaptics/Public Model Zoo models finalization – ==Closed.== Identified the different models and the usecases that be brought out for the model zoo site Drafted a table with the general models , the artifacts and the other related details for the general models and sent it out for review. Followed the same steps to get a collection of the usecase models and frame the usecases as well. [SURAJRAM SRINIVASAN] AI-413: Setup of SL1860 board for executing the models – ==Closed.== Installed the necessary packages and setup the SL1680 board for metrics collection . Tested the pipeline and verified the execution of the models. 
[ANEES ASLAM] AI-452: Benchmark metrics for SL1680 – ==Closed.== Completed and streamlined benchmarking for SL1680 [KEERTHANA SUNDARBABU] AI-356: Backend APIs for landing page with unit testing – ==Closed.== Modified all the landing page APIs based on new Schema New API spec: All the APIS have been modified according to sequelize ORM and deployed to all 3 servers Shared Unit test results for both landing and listing page on jira [KEERTHANA SUNDARBABU] AI-366: Backend APIs for model listing and models searchn – ==Closed.== Search API has been added in the backend /api/models?search= The api fetches the model list based on whether the search param is model_name or usecase_name The search API is also sequelized and tested and deployed. [JAYAPRAKASH MASILAMANI] AI-420: Collection of model metrics for the set of selected models including RPI – ==Closed.== Collected the model metrics for different Yolo models like instance segmentation, object detection, pose estimation, face detection and Fast SAM segmentation through MLTK profiling. Pushed all the model metrics to the pipeline and loaded in the database. [JAYAPRAKASH MASILAMANI] AI-431: Metrics collection of use case - RPI – ==Closed.== Generated the model metrics for Inception Resnetv2 model of Driver Distraction Usecase. Pushed the model metrics via CI pipeline and loaded the same in DB. [JAYAPRAKASH MASILAMANI] AI-454: Curation of model details for each of the selected models - 5 models – ==Closed.== Collected the model metrics for pytorch models such as Alexnet, SqueezeNet, ShuffleNet, MobileNetV3 and ResNet50 through MLTK profiling. Pushed all the model metrics to the pipeline and loaded in the database. [SWATHI RAJAMANI] AI-357: UI for Landing Page – ==Closed.== Updated landing page and search page based on new api changes Added pagination for model listing Added filter functionalities for model filtering. 
[SHAIK SHUAIB AFNAN] AI-392: Collection of model metrics for the set of selected models including RPI – ==Closed.== Pushed metrics of ResNet101, ShuffleNetV2 & MobileNetV3 pytorch models into ai-model-zoo repository. [SHAIK SHUAIB AFNAN] AI-405: Face Matching System - Perform live matching and detection of faces – ==Closed.== Pulled the data of face-matching into amazon s3 bucket using dvc and pulled it for using for inference. Wrote script to log the parameters, metrics and artifacts into mlflow. Performed inference of the face matching system using the data pulled from dvc and logged training info into mlflow. Added details on how to perform the inference into readme file. [SHAIK SHUAIB AFNAN] AI-406: Metrics collection of use case - RPI – ==Closed.== Pushed the use-case metrics from raspberry pi for face-matching system into ai-model-zoo repository successfully. The metrcis are collected using the tflite file. [SANJAY RAVI] AI-453- API request memoization and caching data– ==Closed.== Implemented Caching for Home and Models Pages Implemented Animations and Transitions [SANJAY RAVI] AI-377 - Packaging and Bundling of CLI tools with documentation on usage– ==Closed.== Package and bundling of the CLI tool has been done and has been pushed to PyPI. [SANJAY RAVI] AI-375-CLI tool support for deployment into Rapsberry Pi– ==Closed.== CLI Support for deployment in RPI has been completed. JIRA TICKET(S) IN PROGRESS: [JENANI SAMPATH] AI-381: Deployment pipeline setup of public model zoo site - Prod – ==In Progress.== Created deployment pipeline for each environments dev,qa,release beta and production respectively. Need to update gitlab variables for release beta and prod once setup is ready. [ETA - 12/06] [JENANI SAMPATH] AI-382: Planning of deployment of model zoos for synaptics - Prod – ==In Progress.== Created deployment pipeline for each environments dev,qa,release beta and production respectively. 
Need to update gitlab variables for release beta and prod once setup is ready. [ETA - 12/06] [SURAJRAM SRINIVASAN] AI-393: Collection of model metrics for Synaptics 5 models – ==In Progress.== Started collecting the metrics for the Synaptics models using the SL1680 pipeline. Working on collecting the metrics for the next set of models [ETA - 07/06 ] [SURAJRAM SRINIVASAN] AI-448: Plan for DB backup using PG backup – ==In Progress.== Tested on using the PG backup and PG restore for backup ofthe DB data. Working on writing a CRON script to trigger a backup on a daily basis. [ETA - 07/06] [KEERTHANA SUNDARBABU] AI-358:UI for user sign-up / login page with unit testingn – ==In Progress.== Working on designing user login page UI Improving the UI and working on the UI flow Working on integrating with backend user authentication using acces token [ETA - 07/06] [JAYAPRAKASH MASILAMANI] AI-360: Backend APIs for User sign up and Login unit testing – ==In Progress.== Created a user pool in AWS Cognito for the User management module for the Synaptics ModelZoo. Completed the implementation of APIs for User SignUp, User Login and User Logout using CognitoServiceProvider module in the backend. Implemented the authentication Middleware for verify the Cognito Access Token using CognitoJwtVerifier module for all the Fetch APIs in the backend. Working on the Unit test implementation of the User login ang logout APIs. 
[ETA - 07/06]
[SWATHI RAJAMANI] AI-365: APIs for model details, metrics and download options – ==In Progress.== Added API routes for model details and metrics. Made schema changes. [ETA - 07/06]
[SWATHI RAJAMANI] AI-363: UI for model details, metrics and download options – ==In Progress.== Updated the model details page from the API references. [ETA - 07/06]
[SANJAY RAVI] AI-368: UI for Other page – ==In Progress.== Need to implement UI components for the Other page with proper transitions. [ETA - 07/06]
[SANJAY RAVI] AI-369: APIs for Other pages – ==In Progress.== Updating the API for the model zoo Other pages. [ETA - 07/06]
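The DB backup plan mentioned in AI-448 (PG backup/restore driven by a daily cron script) could be sketched roughly as follows. This is a hypothetical sketch, not the actual ticket's script: the database name, backup directory, and connection details are placeholder assumptions, and it builds a standard `pg_dump` invocation.

```python
import datetime
import pathlib

def build_pg_dump_command(db_name: str, backup_dir: str,
                          host: str = "localhost", port: int = 5432,
                          user: str = "postgres") -> list:
    """Build a pg_dump invocation that writes a dated, custom-format dump
    (custom format dumps can later be restored with pg_restore)."""
    stamp = datetime.date.today().isoformat()
    out_file = pathlib.Path(backup_dir) / ("%s_%s.dump" % (db_name, stamp))
    return [
        "pg_dump",
        "-h", host,
        "-p", str(port),
        "-U", user,
        "-F", "c",           # custom format, restorable via pg_restore
        "-f", str(out_file),
        db_name,
    ]

# A daily cron entry (e.g. 02:00) would then run a small wrapper that
# executes this command, e.g.:
#   0 2 * * * /usr/local/bin/backup_db.py
```

The command list can be handed to `subprocess.run` inside the wrapper script; separating command construction from execution keeps it easy to unit-test.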
User Login APIs Flow
Flow Diagram:
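While the diagram itself is not included here, the flow described in the weekly status (AWS Cognito sign-up/login, with middleware that verifies the access token before any fetch API responds) can be sketched in outline. Everything below is a hypothetical stand-in: the real backend uses CognitoServiceProvider and CognitoJwtVerifier, which are stubbed out here purely to show the control flow.

```python
# Hypothetical control-flow sketch only: Cognito calls are stubbed.
VALID_TOKENS = {"demo-access-token"}  # stand-in for Cognito-issued JWTs

def login(username: str, password: str) -> dict:
    """Step 1: authenticate the user and return an access token
    (stub for the CognitoServiceProvider login call)."""
    if username and password:
        return {"status": 200, "access_token": "demo-access-token"}
    return {"status": 401, "error": "invalid credentials"}

def auth_middleware(access_token: str) -> bool:
    """Step 2: every protected API first verifies the access token
    (stub for the CognitoJwtVerifier check)."""
    return access_token in VALID_TOKENS

def get_model_details(access_token: str) -> dict:
    """Step 3: a protected fetch API responds only after the
    middleware accepts the token."""
    if not auth_middleware(access_token):
        return {"status": 401, "error": "unauthorized"}
    return {"status": 200, "data": "model details"}
```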
Model Metrics
The model metrics are updated to the DB and MLflow using a CI/CD pipeline. Repo: ai-model-zoo
Workflow
The CI/CD pipeline consists of two stages: generate metrics and upload to mlflow-db.
For Model-zoo Public:
1. Checkout a branch from the updated_metrics branch and name the branch in this format .
2. In that branch, edit the details file in the details directory with the relevant details about the model and rename it to your model's name.
3. While adding the model download URLs, make sure to add all the formats available.
4. Add the model file in the model directory.
5. Push the changes to trigger the pipeline.
6. Verify the pipeline is successful in GitLab.
Gitlab-ci for model metrics collection [model-zoo public]
Synaptics SL1680 pipeline for metrics collection
The CI/CD pipeline consists of two stages: generate metrics in SL1680 and upload to mlflow-db.
1. Checkout a branch from the sl1680_pipeline branch and name the branch in this format .
2. In that branch, edit the details file in the details directory with the relevant details about the model and rename it to your model's name.
3. While adding the model download URLs, make sure to add all the formats available.
4. Add the model file in the model directory.
5. Push the changes to trigger the pipeline.
6. Verify the pipeline is successful in GitLab.
Gitlab-ci for sl1680 pipeline metrics collection
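The two-stage pipeline described above might look roughly like the following `.gitlab-ci.yml` sketch. The stage names mirror the description, but the script names and paths are illustrative assumptions, not the actual pipeline definition.

```yaml
# Illustrative sketch only - scripts and paths are assumed placeholders.
stages:
  - generate-metrics
  - upload-mlflow-db

generate-metrics:
  stage: generate-metrics
  script:
    - python generate_metrics.py --details details/ --models model/
  artifacts:
    paths:
      - metrics/

upload-mlflow-db:
  stage: upload-mlflow-db
  script:
    - python upload_metrics.py --metrics metrics/
```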
Status - Week ending on 31/05
Summary:
Internal-Engineering - Gitlab/Jira/AWS/Wiki: The ecloud repository's Makefile has now been updated to facilitate the new tags and versioning structure, making it compatible with CICD-Maestro.
Internal Environments - QA/Dev: A recommendation ML service has been dockerized and deployed on the cloud-devel server for testing and demonstrations. All required fields for the ML model have been well-defined, and a private library has been written in Python to process the JSON input and transform it into a 2D labeled array for the model to make predictions. The JSON output response is now structured and appropriately handles error messages. Created a new infra with the USP-Controller coexisting with Astral services, to be shared with Airtel.
Singtel Production: With the help of existing certs in the Huawei Production environment, we created a temporary domain for the REST and WEB services in the VMware setup for penetration-testing purposes. Singtel could consume messages for all of the runtime-streaming and push-notification-feature topics from their Kafka brokers. Troubleshooting is still in progress on their end as to why the messages aren't being consumed through the NAT IP but instead via the private subnet range. The ELK stack has been deployed in the VMware Production environment.
Airtel Production: For Zyxel & DP devices the band-steering configuration is not enabled at the profile level. Created a new profile & analytic group to test the band-steering configuration for around 1k Zyxel devices. As we have started logging the reboot count of devices per day, it has also been added to the weekly report - the report covers devices rebooted at least 10 times a day, repeated on at least 3 days over a week. The Airtel production cloud has been upgraded to 2.10.0. Developed a Docker image using Python 3.8 and all necessary dependencies, and successfully deployed it to production.
An automated script retrieves fingerprint data hourly for 6 hours, then combines those temporary files into a single daily ORC file.
Tickets information:
Closed Tickets | Description
AIR-3643 | Create & train model for providing top 5 recommended actions
AIR-3533 | JSON query processing
DEVOPS-1116 | Refactor Makefile in ecloud
SMN-743 | Resurrect ELK stack in VMware Singtel Prod
AIR-3734 | Enable band-steering configuration
AIR-3735 | Add the reboot count of devices
DEVOPS-1159 | Upgrade Airtel Production cloud to 2.10.0
DEVOPS-1161 | New instance for Astral-USP deployment
AIR-3737 | Docker image for Python packages
In-Progress Tickets | Description | ETA
AIR-3533 | LLM based resolution | 07/06/2024
AIR-3533 | Prompt Engine for Recommendation Service | 07/06/2024
SMN-739 | Kafka message consumption from VMware Prod | 07/06/2024
SMN-742 | Multi-domain cert for Singtel Staging (SAN certs) | 07/06/2024
SMN-744 | Penetration testing for VMware Production setup | 07/06/2024
AIR-3738 | Script for fingerprint report using Spark (6-hour and one-hour) | 05/06/2024
PTO & Vacation: Bharath Rajiv A - 31/05/2024
Plan for next week:
Recommendation Service: To develop a model for recommendation services, we will begin by defining the response and finalizing the service architecture. Next, we will plan the integration of machine learning models with the service and identify the required parameters. Finally, we will finalize the model, response, and input JSON format. To develop a Large Language Model (LLM) capable of offering solutions to access point problems, we will leverage the CatBoost model we have already trained. This LLM will utilize the insights of the CatBoost model to identify and suggest resolutions to issues related to access points. By integrating the strengths of both models, we aim to enhance our ability to troubleshoot and address these problems more effectively and efficiently.
Penetration testing and SFTP connection establishment: Start the data migration activity, as we've created a temporary domain for the REST and WEB services in the VMware setup for penetration-testing purposes. Ensure the SFTP server connection has been established from the backup server by modifying certain firewall and NAT rules.
Logic change in monitoring data: Instead of sending the monitoring files generated by the self-healing services, we need to collect the key values from the ORC data using Spark. Only the devices that exceed the threshold in more than 50% of all the data collected at runtime need to be considered.
Reboot Counts: Airtel reported that for some of the devices the reboot counts on TR and Embedur don't match. So, for some of the devices we will collect the data at regular intervals on both sides for further investigation.
Detailed Summary:
Recommendation Service: A recommendation ML service has been successfully dockerized and deployed on the cloud-devel server for testing and demonstration purposes. All required fields for the model have been meticulously defined to ensure accuracy and consistency. A private library, developed in Python, processes the JSON input by transforming it into a 2D labeled array that the model can utilize to generate predictions. The output response from the model is now structured in JSON format, ensuring clarity and ease of interpretation. Additionally, error-handling mechanisms have been implemented to manage and report any issues that arise during the prediction process. This service is currently undergoing testing, and active changes and tweaks are being made by the development team.
VMware Penetration testing and Kafka Message consumption: From existing certs in the Huawei Production environment, we created a temporary domain for the REST () and WEB () services for the VMware Prod setup. Francheska from the Singtel end ensured that all the messages are being consumed from the push-notification service and runtime-streaming topics.
ELK Stack setup in Singtel VMware Prod: Services pertaining to the ELK stack setup have been deployed in the VMware Production environment. Ensured that the functionality of Elasticsearch, Logstash, Kibana and Filebeat works as expected.
Band-Steering: Created a new analytic group and profile called "Zyxel_band_steering" with band steering enabled in the RRM configuration. Using REST APIs, changed the analytic group of the 1k Zyxel devices.
Self-healing weekly report: Used the self-healing reports for the past 7 days, which contain the devices rebooted at least 10 times a day. Used a Python script to fetch the repeated cases over that period.
Cloud Upgrade: On 30/5/24 the Airtel production cloud was upgraded to 2.10.0. In this release we have made some improvements to the client model. The geo-location service will be deployed once Airtel creates the policy in the firewall.
Python Package in Production: Built and deployed a Docker image to production. The image uses a Python 3.8 base and incorporates all essential Python libraries for the application.
Fingerprint Report: This script automates the collection and consolidation of fingerprint data for a 24-hour period. It gathers data hourly for the first 6 hours and merges them into a single daily ORC file for further analysis.
New instance for Airtel-Astral-USP: A new instance with Astral services was spun up with our Terraform IaC. USP services were deployed with certs compatible with TR369 devices.
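The weekly-report logic described above (devices rebooting at least 10 times a day, repeated across the week) can be sketched as a small Python filter. The data shape here is an assumed placeholder: one `{device_id: reboot_count}` map per day, not the actual self-healing report format.

```python
from collections import defaultdict

def repeated_reboot_devices(daily_reports, min_reboots=10, min_days=3):
    """From per-day {device_id: reboot_count} maps, return the devices
    that rebooted at least `min_reboots` times on at least `min_days`
    of the days covered by `daily_reports`."""
    days_flagged = defaultdict(int)
    for day in daily_reports:
        for device, count in day.items():
            if count >= min_reboots:
                days_flagged[device] += 1
    return sorted(d for d, n in days_flagged.items() if n >= min_days)

# Example: device "a" crosses the 10-reboot threshold on 3 days,
# device "b" on only 1 day, so only "a" is reported.
week = [{"a": 12, "b": 4}, {"a": 15}, {"a": 10, "b": 11}]
print(repeated_reboot_devices(week))  # -> ['a']
```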
Penetration Testing
Penetration Testing on AWS Resources
As we are an AWS customer, we are permitted to conduct penetration testing (pentesting) on our own AWS resources, subject to the guidelines and restrictions outlined in the AWS Penetration Testing policy. This provides an overview of the key considerations and steps involved in performing pentesting on your AWS infrastructure.
AWS Penetration Testing Policy
AWS allows customers to perform security assessments and penetration tests on their own resources hosted within AWS. However, certain conditions and guidelines must be followed:
Customers can conduct pentesting on specific services, such as Amazon EC2 instances, NAT Gateways, and load balancers, without prior approval.
For certain services, such as AWS DNS (Route 53), AWS Direct Connect, and AWS Shield, customers must obtain prior approval before testing.
Ref: AWS Penetration Testing
Scope and Limitations
When conducting a pentest on your AWS resources, adhere to the following scope and limitations:
Limit the testing scope to your own resources only.
Do not perform tests that could impact other AWS customers or AWS infrastructure.
Avoid tests that may cause degradation of service, Distributed Denial of Service (DDoS), or excessive resource consumption.
Ref: AWS Acceptable Use Policy
Notification and Approval Process
Before initiating a pentest, notify AWS by following these steps:
1. Submit a request through the AWS Vulnerability / Penetration Testing Request Form.
2. Provide details about your planned testing activities, including start and end dates, specific AWS services to be tested, and source IP addresses.
3. AWS will review the request and provide additional guidance or restrictions if necessary.
Ref: AWS Penetration Testing
Permitted Testing Techniques
AWS allows the following testing techniques:
Vulnerability scanning
Port scanning
Manual testing methods
However, the following testing techniques are prohibited or restricted:
DNS zone walking
DoS attacks or simulating DoS attacks
Accessing or attempting to access AWS services or infrastructure not part of your own resources
Ref: AWS Acceptable Use Policy
Third-Party Services and Applications
If your AWS infrastructure hosts third-party services or applications, obtain explicit permission from the third-party provider before including them in your pentest scope. Ensure compliance with any additional terms and conditions set by the third-party provider.
Ref: AWS Penetration Testing
Legal and Compliance Considerations
Ensure that your pentesting activities comply with all applicable laws, regulations, and the AWS Acceptable Use Policy. This includes:
Respecting intellectual property rights
Adhering to privacy laws and data protection regulations
Understanding the legal implications and potential risks associated with pentesting
Ref: AWS Acceptable Use Policy (Penetration Testing)
Reporting and Responsible Disclosure
If you discover any vulnerabilities or security issues during your pentest, responsibly disclose them to AWS through the AWS Vulnerability Reporting process. AWS encourages responsible disclosure and appreciates the efforts of security researchers in identifying and reporting potential security vulnerabilities.
Black-Box Penetration Testing Test Plan
Objective: The goal of this penetration test is to identify and exploit vulnerabilities in the EC2 instance running specific services to assess the security posture.
Scope: This test will focus on the external aspects of the EC2 instance without internal access or prior knowledge.
1. Scope of Work (SoW)
In-Scope:
The EC2 instance hosting the target services.
All publicly accessible services and applications running on the instance.
Out-of-Scope:
Internal AWS infrastructure and configurations.
Any other AWS services not explicitly mentioned as in-scope.
2. Methodology
Reconnaissance: Gather information about the target instance using passive and active reconnaissance techniques. Tools: WHOIS, DNS enumeration, Shodan, and other open-source intelligence (OSINT) tools.
Enumeration: Identify live services and open ports. Tools: Nmap, Masscan.
Vulnerability Scanning: Scan for known vulnerabilities in the identified services. Tools: Nessus, OpenVAS, Nikto.
Manual Testing: Perform manual testing to identify vulnerabilities that automated tools might miss. Techniques: SQL injection, Cross-Site Scripting (XSS), authentication bypass, etc.
Exploitation: Attempt to exploit identified vulnerabilities to assess their impact. Tools: Metasploit, custom scripts.
Post-Exploitation: Assess the extent of the compromise and potential data access. Techniques: Privilege escalation, data extraction.
3. Rules of Engagement
Allowed Actions: Network scanning and enumeration. Vulnerability scanning and manual testing. Exploitation of identified vulnerabilities.
Prohibited Actions: Denial of service (DoS) attacks. Any activity that could cause disruption to the services.
Testing Window: Preferred time frames for conducting the tests to minimize operational impact.
4. Estimated Timeline
Reconnaissance: 1-2 days. Enumeration: 1-2 days. Vulnerability Scanning: 1-2 days. Manual Testing: 3-5 days. Exploitation: 2-3 days. Reporting: 2-3 days.
5. Reporting
Preliminary Report: Initial findings and potential critical vulnerabilities identified during the test.
Final Report: Comprehensive report including: executive summary; detailed findings with severity ratings; evidence (with screenshots); recommendations.
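As a small illustration of the enumeration step, the basic TCP connect scan that tools like Nmap automate can be sketched in Python. The host and port list are placeholders; per the rules of engagement above, this should only ever be pointed at hosts that are in scope.

```python
import socket

def tcp_connect_scan(host, ports, timeout=0.5):
    """Return the subset of `ports` that accept a TCP connection on `host`.

    This is the plain "connect scan" idea (what Nmap's -sT performs);
    connect_ex returns 0 only when the three-way handshake succeeds.
    Run it only against hosts that are in scope for the engagement.
    """
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

In a real engagement this would be replaced by Nmap or Masscan, which add service/version detection and far faster scanning; the sketch only shows the underlying mechanism.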
Infrastructure Design
Model Zoo Internal Prod Architecture