
[workout] Get Back to Basics with this Bodyweight/Mobility Workout!

 

https://www.youtube.com/watch?v=uXyzx6zmTbA


A Korean Translation of the Heart Sutra, 摩訶般若波羅蜜多心經 (Maha Banya Baramilda Simgyeong)

 

https://youtu.be/5jXcoSDWdH4?si=yLXg-jLoeTfW9ncq

 

摩訶般若波羅蜜多心經

마하반야바라밀다심경

The core words that point to the truth as it is.

觀自在菩薩 行深般若波羅蜜多時 照見五蘊皆空 度一切苦厄

관자재보살 행심반야바라밀다시 조견오온개공 도일체고액

Because Avalokiteshvara is an expression of the deep truth that points to the reality of the world, seeing directly that everything in the world is empty carries one beyond all hardship and into that reality.

舍利子 色不異空 空不異色 色卽是空 空卽是色 受想行識 亦復如是

사리자 색불이공 공불이색 색즉시공 공즉시색 수상행식 역부여시

Shariputra, form is not different from emptiness and emptiness is not different from form, so form is emptiness and emptiness is form; sensation, perception, thought, and consciousness are likewise.

舍利子 是諸法空相 不生不滅 不垢不淨 不增不減

사리자 시제법공상 불생불멸 불구부정 부증불감

Shariputra, because every phenomenon that appears in the world is empty, nothing is born and nothing perishes, nothing is defiled and nothing is pure, nothing increases and nothing decreases.

是故 空中無色無受想行識

시고 공중무색무수상행식

Being empty in this way, form has no separate substance, and sensation, perception, thought, and consciousness have no separate substance either.

無眼耳鼻舌身意 無色聲香味觸法 無眼界 乃至 無意識界

무안이비설신의 무색성향미촉법 무안계 내지 무의식계

Eye, ear, nose, tongue, body, and mind have no separate substance; color, sound, smell, taste, touch, and their phenomena have no separate substance; and so there is no division between seeing and being conscious of what is seen.

無無明 亦無無明盡 乃至 無老死 亦無老死盡

무무명 역무무명진 내지 무노사 역무노사진

Not knowing this changes nothing, and knowing it changes nothing; since there is not even aging and death, there is no escaping from aging and death either.

無苦集滅道 無智 亦無得

무고집멸도 무지 역무득

Because suffering has no substance, there is no cause of suffering, no cessation of suffering, and no path to end suffering; because there is no separate wisdom, there is no wisdom to be gained either.

以無所得故 菩提薩埵 依般若波羅蜜多

이무소득고 보리살타 의반야바라밀다

Since there is nothing at all to be gained, the seeker should only wish for the truth as it is to reveal itself.

故心無罣礙 無罣礙故 無有恐怖 遠離顚倒夢想 究竟涅槃

고심무가애 무가애고 무유공포 원리전도몽상 구경열반

Then nothing obstructs the mind, and with nothing to obstruct it there is nothing to fear, so beyond all mistaken beliefs the truth as it is reveals itself, leaving no question behind.

三世諸佛 依般若波羅蜜多 故得阿耨多羅三藐三菩提

삼세제불 의반야바라밀다 고득아뇩다라삼먁삼보리

All Buddhas of the past, the present, and the future awaken only by opening their eyes to the truth as it is; thus ultimate enlightenment arises and the search comes entirely to an end.

故知般若波羅蜜多 是大神呪 是大明呪 是無上呪 是無等等呪 能除 一切苦 眞實不虛

고지반야바라밀다 시대신주 시대명주 시무상주 시무등등주 능제 일체고 진실불허

So bear this in mind: seeing the truth as it is, directly, is the most mysterious and surest way, the highest method beyond all comparison; it carries one over every hardship and reaches the real, and so it is not in vain.

故說般若波羅蜜多呪 卽說呪曰

고설반야바라밀다주 즉설주왈

Therefore it is taught: say the following words and open your eyes to the truth as it is.

揭諦揭諦 波羅揭諦 波羅僧揭諦 菩提 娑婆訶

아제아제 바라아제 바라승아제 모지 사바하

It is. It is. It is all here. May I open my eyes to all that is, right here, right now.

揭諦揭諦 波羅揭諦 波羅僧揭諦 菩提 娑婆訶

아제아제 바라아제 바라승아제 모지 사바하

It is. It is. It is all here. May I open my eyes to all that is, right here, right now.

揭諦揭諦 波羅揭諦 波羅僧揭諦 菩提 娑婆訶

아제아제 바라아제 바라승아제 모지 사바하

It is. It is. It is all here. May I open my eyes to all that is, right here, right now.

Korean Heart Sutra 2020, a Korean translation of the Heart Sutra, rendered by Gwaneum

 

https://blog.naver.com/advaita2007/222027602818

 


 

https://www.yes24.com/Product/Goods/114899186


Why should we read books?
They say six out of ten adults do not read even a single book a year. In a digital age overflowing with information, why should we read at all? The reason is literacy: the ability to read properly and to judge. Literacy does not simply mean being able to read text or knowing the meaning of words. It means the ability to dig out information from materials connected across many contexts, to understand it, to interpret it anew, and to carry it all the way to communication.


- From Kim Eul-ho, 《결국 독서력이다》 (In the End, It Comes Down to Reading Ability) -


* Literacy is a real skill.
It is a powerful weapon for making your way through the world. Literacy begins with reading, but it is not limited to reading and decoding books. Even while talking with one another, we feel the absence of real communication. More and more people fail to properly understand what the other person is saying. Why? Because of a lack of reading and listening comprehension. Literacy does not grow overnight; it has to be trained from an early age. That is why reading is necessary.



The Step-Through | Beginners Breakdown!

https://www.youtube.com/watch?v=OGEUeBvoKc0

 

 

This exercise is one of my all-time favorite movements, and although it gained popularity through “animal”-based movement practices, the Step-Through's origins can be traced back to the warm-ups, solo drills, and movement prep of many forms of martial arts and combat sports; a variation can be found being performed in Jiu-Jitsu and wrestling!

The Step-Through is a bodyweight exercise that is not so easy to perform and master. Engaging the core and maintaining proper stability throughout the movement are only a few of the important points to keep in mind during this exercise, which will really test your bodyweight strength, coordination, and balance.

Breaking this drill down, and practicing Mountain Climbers and Sit Throughs beforehand, will really help you to build enough strength and mobility to perform the Step-Through.

Enjoy!

Thanks for watching/reading :)

If you're interested in learning this movement (and many others) in more detail, check out our online Mobility program, which comes with over 50 follow-along, verbal tutorial videos!
https://www.phase6online.com/product/...

Also, check out my IG page for plenty of bodyweight workouts, movement preps, and decompression circuits, alongside tutorials for my favorite movements!
  / steph.rose.phase6



25 Open Source AI Tools to Cut Your Development Time in Half

 

https://jozu.com/blog/25-open-source-ai-tools-to-cut-your-development-time-in-half/

 


Each ML/AI project stakeholder requires specialized tools that efficiently enable them to manage the various stages of an ML/AI project, from data preparation and model development to deployment and monitoring. They tend to use specialized open source tools because these tools act as a significant catalyst for the advancement, development, and ease of AI projects. As a result, numerous open source AI tools have emerged over the years, making it challenging to pick from the available options.

This article highlights some factors to consider when picking open source tools and introduces you to 25 open-source options that you can use for your AI project.

Picking open source tools for an AI project

The open source tooling model has allowed companies to develop diverse ML tools to help you handle particular problems in an AI project. The AI tooling landscape is already quite saturated with tools, and the abundance of options makes tool selection difficult. Some of these tools even provide similar solutions. You may be tempted to lean toward adopting tools just because of the enticing features they present. However, there are other crucial factors that you should consider before selecting a tool, which include:

  • Popularity
  • Impact
  • Innovation
  • Community engagement
  • Relevance to emerging AI trends

Popularity

Widely adopted tools often indicate active development, regular updates, and strong community support, ensuring reliability and longevity.

Impact

A tool with a track record of addressing pain points, delivering measurable improvements, providing long-term project sustainability, and adapting to evolving needs of the problems of an AI project is a good measure of an impactful tool that stakeholders are interested in leveraging.

Innovation

Tools that embrace more modern technologies and offer unique features demonstrate a commitment to continuous improvement and have the potential to drive advancements and unlock new possibilities.

Community engagement

Active community engagement fosters collaboration, provides support, and ensures a tool's continued relevance and improvement.

Relevance to emerging AI trends

Tools aligned with emerging trends like LLMs enable organizations to leverage the latest capabilities, ensuring their projects remain at the forefront of innovation.

25 open source tools for your AI project

Based on these factors, here are 25 tools that you and the different stakeholders on your team can use for various stages in your AI project.

1. KitOps

Multiple stakeholders are involved in the machine learning development lifecycle, and they require different MLOps tools and environments at various stages of the AI project, which makes it hard to guarantee an organized, portable, transparent, and secure model development pipeline.

This introduces opportunities for model lineage breaks and accidental or malicious model tampering or modifications during model development. Since the contents of a model are a "black box"—without efficient storage and lineage—it is impossible to know if a model's or model artifact's content has been tampered with between model development, staging, deployment, and retirement pipelines.

KitOps provides AI project stakeholders with a secure package called ModelKit that they can use to share and manage models, code, metadata, and artifacts throughout the ML development lifecycle.

The ModelKit is an immutable OCI-standard artifact that leverages normal container-native technologies (similar to Docker and Kubernetes), making it seamlessly interoperable and portable across various stakeholders using common software tools and environments. As an immutable package, a ModelKit is tamper-proof. This tamper-proof property provides stakeholders with a versioning system that tracks every single update to any of its contents (i.e., models, code, metadata, and artifacts) throughout the ML development and deployment pipelines.

2. LangChain

LangChain is a machine learning framework that enables ML engineers and software developers to build end-to-end LLM applications quickly. Its modular architecture allows them to easily mix and match its extensive suite of components to create custom LLM applications.

LangChain simplifies the LLM application's development and deployment stages with its ecosystem of interconnected parts, consisting of LangSmith, LangServe, and LangGraph. Together, they enable ML engineers and software developers to build robust, diverse, and scalable LLM applications efficiently.

LangChain enables professionals without a strong AI background to easily build an application with large language models (LLMs).
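
As a rough illustration, here is a minimal LangChain sketch that chains a prompt, a chat model, and an output parser. It assumes the langchain-core and langchain-openai packages and an OpenAI API key in the environment; exact import paths and the model name vary between versions and providers.

```python
# A minimal LangChain sketch (assumes langchain-core + langchain-openai installed
# and OPENAI_API_KEY set; imports differ slightly across LangChain versions).
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Compose prompt -> model -> parser into a single runnable chain.
prompt = ChatPromptTemplate.from_template("Summarize this text in one sentence:\n{text}")
llm = ChatOpenAI(model="gpt-4o-mini")  # example model name; use any chat model you have access to
chain = prompt | llm | StrOutputParser()

print(chain.invoke({"text": "LangChain composes LLM components into applications."}))
```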

3. Pachyderm

Pachyderm is a data versioning and management platform that enables engineers to automate complex data transformations. It uses a data infrastructure that provides data lineage via a data-driven versioning pipeline. The version-controlled pipelines are automatically triggered based on changes in the data. It tracks every modification to the data, making it simple to reproduce previous results and to experiment with different pipeline versions.

Pachyderm's data infrastructure provides "data-aware" pipelines with versioning and lineage.

4. ZenML

ZenML is a structured MLOps framework that abstracts the creation of MLOps pipelines, allowing data scientists and ML engineers to focus on the core steps of data preprocessing, model training, evaluation, and deployment without getting bogged down in infrastructure details.

The ZenML framework abstracts MLOps infrastructure complexities and simplifies the adoption of MLOps, making AI project components accessible, reusable, and reproducible.
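
A minimal sketch of what a ZenML pipeline can look like, assuming the post-0.40 top-level `zenml` API (decorator locations have moved between releases); the step bodies are stand-ins.

```python
# A minimal ZenML sketch (assumes `pip install zenml`; follows the `zenml`
# top-level step/pipeline decorators introduced around 0.40).
from zenml import pipeline, step

@step
def load_data() -> list[float]:
    return [1.0, 2.0, 3.0]

@step
def train(data: list[float]) -> float:
    # Stand-in for real training: return a "model score".
    return sum(data) / len(data)

@pipeline
def training_pipeline():
    data = load_data()
    train(data)

if __name__ == "__main__":
    training_pipeline()  # runs on the active ZenML stack (local by default)
```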

5. Prefect

Prefect is an MLOps orchestration framework for machine learning pipelines. It uses the concepts of tasks (individual units of work) and flows (sequences of tasks) to construct an ML pipeline for running different steps of an ML code, such as feature engineering and training. This modular structure enables ML engineers to simplify creating and managing complex ML workflows.

Prefect simplifies data workflow management and provides robust error handling, state management, and extensive monitoring.
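
For illustration, a small Prefect-style flow with two tasks might look like the sketch below (assuming Prefect 2.x; the training step is a stand-in).

```python
# A minimal Prefect 2.x-style sketch (assumes `pip install prefect`).
from prefect import flow, task

@task(retries=2)                       # an individual unit of work, retried on failure
def extract_features(raw: list[int]) -> list[int]:
    return [x * 2 for x in raw]

@task
def train_model(features: list[int]) -> float:
    return sum(features) / len(features)   # stand-in for real training

@flow                                   # a flow is a sequence/graph of tasks
def training_flow():
    features = extract_features([1, 2, 3])
    score = train_model(features)
    print(f"model score: {score}")

if __name__ == "__main__":
    training_flow()
```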

6. Ray

Ray is a distributed computing framework that makes it easy for data scientists and ML engineers to scale machine learning workloads during model development. It simplifies scaling computationally intensive workloads, like loading and processing extensive data or deep learning model training, from a single machine to large clusters.

Ray's core distributed runtime makes it easy to scale ML workloads.
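
A minimal Ray sketch of the task pattern described above; the per-chunk computation is a stand-in, and on a real cluster only the `ray.init()` target changes.

```python
# A minimal Ray sketch (assumes `pip install ray`): the same function runs
# locally or across a cluster by changing only the ray.init() target.
import ray

ray.init()  # on a cluster node, connect with ray.init(address="auto")

@ray.remote
def score_chunk(chunk):
    # Stand-in for an expensive per-partition computation.
    return sum(x * x for x in chunk)

chunks = [list(range(i, i + 1000)) for i in range(0, 10_000, 1000)]
futures = [score_chunk.remote(c) for c in chunks]   # schedule tasks in parallel
print(sum(ray.get(futures)))                        # gather results
```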

7. Metaflow

Metaflow is an MLOps tool that enhances the productivity of data scientists and ML engineers with a unified API. The API offers a code-first approach to building data science workflows, and it contains the whole infrastructure stack that data scientists and ML engineers need to execute AI projects from prototype to production.
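
As a rough sketch, a Metaflow flow is an ordinary Python class whose steps are chained with `self.next`; assuming the file is saved as `train_flow.py`, it runs with `python train_flow.py run`.

```python
# A minimal Metaflow sketch (assumes `pip install metaflow`).
from metaflow import FlowSpec, step

class TrainFlow(FlowSpec):

    @step
    def start(self):
        self.data = [1, 2, 3, 4]        # instance attributes are persisted as artifacts
        self.next(self.train)

    @step
    def train(self):
        self.score = sum(self.data) / len(self.data)   # stand-in for training
        self.next(self.end)

    @step
    def end(self):
        print(f"score: {self.score}")

if __name__ == "__main__":
    TrainFlow()
```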

8. MLflow

MLflow allows data scientists and engineers to manage model development and experiments. It streamlines your entire model development lifecycle, from experimentation to deployment.

MLflow’s key features include:

  • MLflow Tracking: provides an API and UI to record and query experiment runs, including parameters, code versions, metrics, and output files from training your machine learning models, and lets you compare several runs after logging the results.

  • MLflow Projects: provides a standard, reusable format for packaging data science code, plus an API and CLI for running projects and chaining them into workflows. Any Git repository or local directory can be treated as an MLflow project.

  • MLflow Models: offers a standard format for deploying ML models in diverse serving environments.

  • MLflow Model Registry: provides a centralized model store, a set of APIs, and a UI for collaboratively managing the full lifecycle of a model. It also enables model lineage (from your model experiments and runs), model versioning, and development stage transitions (e.g., moving a model from staging to production).
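
A minimal MLflow tracking sketch tying these pieces together (assuming mlflow and scikit-learn are installed; the dataset, model, and metric are placeholders, and results land in the local ./mlruns store unless you configure a tracking URI).

```python
# A minimal MLflow tracking sketch (assumes `pip install mlflow scikit-learn`).
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)

    mlflow.log_param("n_estimators", n_estimators)               # experiment parameter
    mlflow.log_metric("accuracy", model.score(X_test, y_test))   # run metric
    mlflow.sklearn.log_model(model, "model")                     # the model artifact itself
```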

9. Kubeflow

Kubeflow is an MLOps toolkit for Kubernetes. It is designed to simplify the orchestration and deployment of ML workflows on Kubernetes clusters. Its primary purpose is to make scaling and managing complex ML systems easier, portable, and scalable across different infrastructures.

Kubeflow is a key player in the MLOps landscape, and it introduced a robust and flexible platform for building, deploying, and managing machine learning systems on Kubernetes. This unified platform for developing, deploying, and managing ML models enables collaboration among data scientists, ML engineers, and DevOps teams.
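
For illustration, a toy pipeline written with the KFP v2 SDK might look like the sketch below; the component bodies are stand-ins, and the compiled YAML would then be uploaded to a Kubeflow Pipelines cluster.

```python
# A minimal Kubeflow Pipelines sketch using the KFP v2 SDK (assumes `pip install kfp`).
from kfp import dsl, compiler

@dsl.component
def preprocess(rows: int) -> int:
    return rows * 2           # stand-in for a real preprocessing step

@dsl.component
def train(rows: int) -> str:
    return f"trained on {rows} rows"

@dsl.pipeline(name="toy-training-pipeline")
def training_pipeline(rows: int = 100):
    prep = preprocess(rows=rows)
    train(rows=prep.output)   # each step runs as its own container on Kubernetes

if __name__ == "__main__":
    compiler.Compiler().compile(training_pipeline, "pipeline.yaml")
```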

10. Seldon Core

Seldon Core is an MLOps platform that simplifies the deployment, serving, and management of machine learning models by converting ML models (TensorFlow, PyTorch, H2O, etc.) or language wrappers (Python, Java, etc.) into production-ready REST/gRPC microservices. Think of them as pre-packaged inference servers or custom servers. Seldon Core also enables the containerization of these servers and offers out-of-the-box features like advanced metrics, request logging, explainers, outlier detectors, A/B tests, and canaries.

Seldon Core's solution focuses on model management and governance. Its adoption is geared toward ML and DevOps engineers, specifically for model deployment and monitoring, instead of small data science teams.

11. DVC (Data Version Control)

Implementing version control for machine learning projects entails managing not only code but also datasets, ML models, performance metrics, and other development-related artifacts. DVC's purpose is to bring the best practices from software engineering, like version control and reproducibility, to the world of data science and machine learning. DVC enables data scientists and ML engineers to track changes to data and models the way Git does for code, and it runs on top of any Git repository. It also enables the management of model experiments.

DVC's integration with Git makes it easier to apply software engineering principles to data science workflows.
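
Data is tracked with the dvc CLI (`dvc add`, `dvc push`), but there is also a Python API for reading versioned data back. A hedged sketch, where the file path and Git tag are hypothetical:

```python
# A minimal sketch of DVC's Python API (assumes `pip install dvc pandas` and a
# repo where data/train.csv is already tracked with `dvc add`).
from io import StringIO

import dvc.api
import pandas as pd

# Read the copy of the file that belongs to a specific Git revision.
csv_text = dvc.api.read(
    "data/train.csv",   # hypothetical path tracked by DVC in this repo
    rev="v1.0",         # any Git ref: tag, branch, or commit
    mode="r",
)
df = pd.read_csv(StringIO(csv_text))
print(df.shape)
```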

12. Evidently AI

Evidently AI is an observability platform designed to analyze and monitor production machine learning (ML) models. Its primary purpose is to help ML practitioners understand and maintain the performance of their deployed models over time. Evidently provides a comprehensive set of tools for tracking key model performance metrics, such as accuracy, precision, recall, and drift detection. It also enables stakeholders to generate interactive reports and visualizations that make it easy to identify issues and trends.
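
A minimal drift-report sketch with Evidently's Report API; this follows the 0.4.x interface, which may differ in newer releases, and the reference/current split here is purely illustrative.

```python
# A minimal Evidently sketch (assumes `pip install evidently scikit-learn`).
from evidently.report import Report
from evidently.metric_preset import DataDriftPreset
from sklearn.datasets import load_iris

X, _ = load_iris(return_X_y=True, as_frame=True)
reference = X.iloc[:75]   # data the model was trained on
current = X.iloc[75:]     # data seen in production

report = Report(metrics=[DataDriftPreset()])
report.run(reference_data=reference, current_data=current)
report.save_html("data_drift_report.html")   # interactive report for stakeholders
```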

13. Mage AI

Mage AI is a data transformation and integration framework that allows data scientists and ML engineers to build and automate data pipelines without extensive coding. Data scientists can easily connect to their data sources, ingest data, and build production-ready data pipelines within Mage notebooks.

14. MLRun

MLRun provides a serverless technology for orchestrating end-to-end MLOps systems. The serverless platform converts ML code into scalable, managed microservices. This streamlines the development and management pipelines of data scientists and ML, software, and DevOps/MLOps engineers throughout the entire machine learning (ML) lifecycle, across their various environments.

15. Kedro

Kedro is an ML development framework for creating reproducible, maintainable, modular data science code. Kedro improves the AI project development experience through data abstraction and code organization. Using lightweight data connectors, it provides a centralized data catalog to manage and track datasets throughout a project. This enables data scientists to focus on building production-level code through Kedro's data pipelines, while other stakeholders can use the same pipelines in different parts of the system.

Kedro focuses on data pipeline development by enforcing SWE best practices for data scientists.
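
As a rough sketch, a Kedro pipeline is a list of nodes whose inputs and outputs are named dataset entries; in a real project the dataset names below would be declared in the data catalog (conf/base/catalog.yml), so treat them as assumptions.

```python
# A minimal Kedro pipeline sketch (assumes `pip install kedro`); in a real project
# this would live in a pipeline registry and the dataset names in the catalog.
from kedro.pipeline import Pipeline, node

def clean(raw_df):
    return raw_df.dropna()

def train(clean_df):
    return {"n_rows": len(clean_df)}   # stand-in for a trained model

training_pipeline = Pipeline(
    [
        node(clean, inputs="raw_data", outputs="clean_data", name="clean_node"),
        node(train, inputs="clean_data", outputs="model", name="train_node"),
    ]
)
```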

16. WhyLogs

WhyLogs by WhyLabs is an open-source data logging library designed for machine learning (ML) models and data pipelines. Its primary purpose is to provide visibility into data quality and model performance over time.

With WhyLogs, MLOps engineers can efficiently generate compact summaries of datasets (called profiles) that capture essential statistical properties and characteristics. These profiles track changes in datasets over time, helping detect data drift – a common cause of model performance degradation. It also provides tools for visualizing key summary statistics from dataset profiles, making it easy to understand data distributions and identify anomalies.
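
A minimal whylogs sketch that profiles one batch of data and persists the profile for later comparison; the columns are made up, and the writer API shown follows the whylogs 1.x releases.

```python
# A minimal whylogs sketch (assumes `pip install whylogs pandas`).
import pandas as pd
import whylogs as why

batch = pd.DataFrame({
    "age": [23, 31, 45, 52],
    "income": [40_000, 52_000, 61_000, None],   # missing values are captured too
})

results = why.log(batch)                 # build a statistical profile of the batch
profile_view = results.view()
print(profile_view.to_pandas())          # summary statistics per column
results.writer("local").write()          # persist the profile for monitoring over time
```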

17. Feast

Defining, storing, and accessing features for model training and online inference in silos (i.e., from different locations) can lead to inconsistent feature definitions, data duplication, complex data access and retrieval, etc. Feast solves the challenge of stakeholders managing and serving machine learning (ML) features in development and production environments.

Feast is a feature store that bridges the gap between data and machine learning models. It provides a centralized repository for defining feature schemas, ensuring consistency across different teams and projects. This can ensure that the feature values used for model inference are consistent with the state of the feature at the time of the request, even for historical data.

Feast is a centralized repository for managing, storing, and serving features, ensuring consistency and reliability across training and serving environments.
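
A hedged sketch of online feature retrieval with Feast: it assumes a feature repository already created with `feast init` and applied with `feast apply`, and the feature view, feature, and entity names are hypothetical.

```python
# A minimal Feast sketch (assumes `pip install feast` and an existing feature repo
# whose feature_store.yaml lives in the current directory).
from feast import FeatureStore

store = FeatureStore(repo_path=".")

# Fetch online features for model inference, consistent with training-time values.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",        # hypothetical feature_view:feature names
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],          # hypothetical entity key
).to_dict()

print(features)
```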

18. Flyte

Data scientists and data and analytics pipeline engineers typically rely on ML and platform engineers to transform models and training pipelines into production-ready systems.

Flyte empowers data scientists and data and analytics engineers with the autonomy to work independently. It provides them with a Python SDK for building workflows, which can then be effortlessly deployed to the Flyte backend. This simplifies the development, deployment, and management of complex ML and data workflows by building and executing reliable and reproducible pipelines at scale.
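
For illustration, a small Flytekit workflow; the same module can be run locally as plain Python or registered to a Flyte backend, and the tasks here are stand-ins.

```python
# A minimal Flyte sketch (assumes `pip install flytekit`).
from flytekit import task, workflow

@task
def preprocess(n: int) -> list[int]:
    return list(range(n))

@task
def train(data: list[int]) -> float:
    return sum(data) / len(data)        # stand-in for real model training

@workflow
def training_wf(n: int = 10) -> float:
    return train(data=preprocess(n=n))

if __name__ == "__main__":
    print(training_wf(n=5))             # local execution for quick iteration
```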

19. Featureform

The ad-hoc practice of data scientists developing features for model development in isolation makes it difficult for other AI project stakeholders to understand, reuse, or build upon existing work. This leads to duplicated effort, inconsistencies in feature definitions, and difficulties in reproducing results.

Featureform is a virtual feature store that streamlines data scientists' ability to manage and serve features for machine learning models. It acts as a "virtual" layer over existing data infrastructure like Databricks and Snowflake. This allows data scientists to engineer and deploy features directly to the data infrastructure for other stakeholders. Its structured, centralized feature repository and metadata management approach empower data scientists to seamlessly transition their work from experimentation to production, ensuring reproducibility, collaboration, and governance throughout the ML lifecycle.

20. Deepchecks

Deepchecks is an ML monitoring tool for continuously testing and validating machine learning models and data from an AI project's experimentation to the deployment stage. It provides a wide range of built-in checks to validate model performance, data integrity, and data distribution. These checks help identify issues like model bias, data drift, concept drift, and leakage.
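
A minimal Deepchecks sketch running the built-in train/test validation suite on a toy dataset; the API shown follows recent 0.x tabular releases and may shift in newer versions.

```python
# A minimal Deepchecks sketch (assumes `pip install deepchecks scikit-learn`).
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import train_test_validation
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

train_ds = Dataset(X_train.assign(target=y_train), label="target")
test_ds = Dataset(X_test.assign(target=y_test), label="target")

# Run the built-in train/test validation checks (drift, leakage, integrity, ...).
result = train_test_validation().run(train_ds, test_ds)
result.save_as_html("deepchecks_report.html")
```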

21. Argo

Argo provides a Kubernetes-native workflow engine for orchestrating parallel jobs on Kubernetes. Its primary purpose is to streamline the execution of complex, multi-step workflows, making it particularly well-suited for machine learning (ML) and data processing tasks. It enables ML engineers to define each step of the ML workflow (data preprocessing, model training, evaluation, deployment) as individual containers, making it easier to manage dependencies and ensure reproducibility.

Argo workflows can be defined either as a sequence of tasks (steps) or as a Directed Acyclic Graph (DAG) that captures dependencies between tasks: each node represents a step in the workflow (typically a containerized task), and edges represent dependencies between steps.

22. Deep Lake

Deep Lake (formerly Activeloop Hub) is an ML-specific database tool designed to act as a data lake for deep learning and a vector store for RAG applications. Its primary purpose is accelerating model training by providing fast and efficient access to large-scale datasets, regardless of format or location.

23. Hopsworks feature store

Advanced MLOps pipelines with at least an MLOps maturity level 1 architecture require a centralized feature store, and Hopsworks is well suited to that role. It provides an end-to-end solution for managing the ML feature lifecycle, from data ingestion and feature engineering to model training, deployment, and monitoring. This facilitates feature reuse, consistency, and faster model development.

24. NannyML

NannyML is a Python library specialized in post-deployment monitoring and maintenance of machine learning (ML) models. It enables data scientists to detect and address silent model failure, estimate model performance without immediate ground truth data, and identify data drift that might be responsible for performance degradation.

25. Delta Lake

Delta Lake is a storage layer framework that provides reliability to data lakes. It addresses the challenges of managing large-scale data in lakehouse architectures, where data is stored in an open format and used for various purposes, like machine learning (ML). Data engineers can build real-time pipelines or ML applications using Delta Lake because it supports both batch and streaming data processing. It also brings ACID (atomicity, consistency, isolation, durability) transactions to data lakes, ensuring data integrity even with concurrent reads and writes from multiple pipelines.
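
A minimal sketch using the delta-rs Python bindings (the `deltalake` package); Spark users would instead write with `df.write.format("delta")`, and the table path here is arbitrary.

```python
# A minimal Delta Lake sketch (assumes `pip install deltalake pandas`).
import pandas as pd
from deltalake import DeltaTable, write_deltalake

df = pd.DataFrame({"user_id": [1, 2], "score": [0.8, 0.6]})

# Each write is an ACID transaction; "append" mode adds a new table version.
write_deltalake("./events_delta", df, mode="append")

table = DeltaTable("./events_delta")
print(table.version())             # every write bumps the version (time travel)
print(table.to_pandas().head())    # read the latest snapshot back as a DataFrame
```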

Considering factors like popularity, impact, innovation, community engagement, and relevance to emerging AI trends can help guide your decision when picking open source AI/ML tools, especially when several tools offer the same value proposition. In some cases, such tools may take different approaches to the same use case or possess unique features that make them a perfect fit for a specific project.


Even a painful daily life is not painful all the time. Hasn't the sky that looked, as late as lunchtime, ready to pour down on us sometimes cleared into a bright afternoon, as if nothing had happened? And when a shaft of intense sunlight slips through the narrow gap in the blinds and fills the space all the way to my desk, isn't that a small marvel in itself?


- From Kim Beom-jun, 《지옥에 다녀온 단테》 (Dante, Back from Hell) -


* Everything has two sides. Because there is pain there is glory, and by indulging in glory we fall a thousand fathoms. Darkness gives birth to light, and light to darkness. Everything appears at the most fitting time, in the most fitting form. When no way forward is visible in a difficult stretch, hold on to hope and wait; when things seem to be going too well, lower yourself with restraint and humility.


