
Tristan Kruse

About Me

  • 🎓 Education:

    • Master's in Business Analytics & Operations Research (Supply Chain Management), Catholic University of Eichstätt-Ingolstadt.
    • Bachelor's in Business Management (Supply Chain Management), Bonn-Rhein-Sieg University of Applied Sciences.
  • 💼 Work Experience:

    • Data Analyst at Genpact, specializing in supply chain analytics, dashboard development, and data optimization.
  • 🌐 Connect:

  • 📖 Interests:

    • Beyond my passion for data, I enjoy playing chess, reading, listening to podcasts, exploring mathematics, learning new languages, and practicing martial arts.

Welcome to My GitHub! 🌟

I'm Tristan Kruse, a passionate professional with a strong focus on Supply Chain Management and Data Analytics, and I'm deeply interested in leveraging technology to solve complex supply chain challenges.

💻 My work involves tools and technologies like Python, SQL, AWS, and Tableau, with applications in reinforcement learning, forecasting, and decision optimization.

🌟 Highlights:

  • Data Analyst at Genpact, developing dashboards and optimizing data workflows to enhance supply chain operations.
  • Modeled and solved complex supply chain problems, including the Beer Game, using Reinforcement Learning.
  • Worked on innovative projects like EnginBERT for engineering literature retrieval and a returns optimization model using Neural Networks.

Thanks for visiting!

🔬 Featured Projects

Reinforcement Learning Assignment Algorithm (RL-ACA)

A reinforcement learning approach to optimize food delivery logistics, addressing the Restaurant Meal Delivery Problem through an enhanced Anticipatory Customer Assignment framework. The project introduces RL-ACA, a novel algorithm that uses dynamic postponement strategies learned through Deep Q-Networks to optimize delivery assignment and bundling decisions. The system is comprehensively validated using real-world Meituan data (647,395 orders across 22 districts) and features statistical analysis across multiple operational contexts.

Repo: RMDP_Algorithm Technologies: Python, PyTorch (Deep Q-Network), NumPy, Pandas, Statistical Analysis, Real-time

Highlights:

  • Achieves a 5.5% reduction in average distance per order and 1.5 percentage point lower idle rates through intelligent postponement decisions, improving driver efficiency and platform sustainability.
  • Demonstrates superior performance in high-stress scenarios with 4.4 percentage point advantage in on-time delivery rates, showcasing adaptability under operational pressure.
  • Validates performance across 120 real-world scenarios with statistical significance testing, providing robust evidence of algorithm effectiveness in diverse urban delivery contexts.
  • Features comprehensive benchmarking framework comparing RL-ACA against baseline methods, with detailed analysis of trade-offs between routing efficiency and delivery timeliness across stakeholder priorities.
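
For a flavor of the core idea, a minimal sketch of a DQN-style postponement decision might look like the following; the state features, network size, and two-action setup are illustrative assumptions, not the repository's actual implementation.

```python
# Illustrative sketch only: the feature set, network shape, and action space here
# are assumptions, not the actual RMDP_Algorithm implementation.
import random
import torch
import torch.nn as nn

class PostponementQNet(nn.Module):
    """Tiny Q-network scoring two actions: assign the order now, or postpone it."""
    def __init__(self, n_features: int = 4, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def choose_action(qnet: PostponementQNet, state: torch.Tensor, epsilon: float = 0.1) -> int:
    """Epsilon-greedy selection: 0 = assign immediately, 1 = postpone."""
    if random.random() < epsilon:
        return random.randrange(2)
    with torch.no_grad():
        return int(qnet(state).argmax().item())

# Hypothetical state: [minutes until deadline, nearest-courier distance (km),
#                      fleet idle rate, number of bundle-compatible open orders]
state = torch.tensor([18.0, 2.4, 0.35, 3.0])
action = choose_action(PostponementQNet(), state)
print("postpone" if action == 1 else "assign now")
```

In the full system, the learned Q-values would be evaluated inside the Anticipatory Customer Assignment loop, so postponed orders can later be bundled with nearby requests.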

OmniChannel-RL

A reinforcement learning framework for modeling returns and decision-making in omnichannel retail. The project leverages a Hierarchical Markov Decision Process (HMDP) and the Proximal Policy Optimization (PPO) algorithm to optimize ordering and allocation strategies for retailers operating online and offline channels with resellable returns.

Repo: OmniChannel-RL Technologies: Python, TensorFlow, Keras, NumPy, Gym Highlights: Achieved a 3% reduction in total costs and up to a 17% increase in service levels. The model is based on the framework outlined in the paper by J. Goedhart, R. Haijema, and R. Akkerman (2023), showcasing the potential of reinforcement learning in complex, hierarchical decision-making environments.
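
To make the setup more concrete, here is a minimal Gym-style environment sketch for the joint ordering-and-allocation decision; the state variables, demand distributions, return rate, and cost weights are illustrative assumptions, not the repository's actual model.

```python
# A minimal sketch of the environment interface only; all numbers are assumptions.
import numpy as np
import gym
from gym import spaces

class OmniChannelEnv(gym.Env):
    """Toy omnichannel retailer: each step, order stock and split on-hand inventory
    between the online and offline channel; resellable returns flow back into stock."""

    def __init__(self, max_order: int = 50):
        super().__init__()
        # Action: (order quantity, fraction of stock allocated online, in tenths)
        self.action_space = spaces.MultiDiscrete([max_order + 1, 11])
        # Observation: [on-hand inventory, pipeline inventory, pending returns]
        self.observation_space = spaces.Box(low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        self.reset()

    def reset(self):
        self.inventory, self.pipeline, self.returns = 20.0, 0.0, 0.0
        return self._obs()

    def step(self, action):
        order_qty, online_tenths = action
        self.inventory += self.pipeline + self.returns   # receive pipeline stock and resellable returns
        self.pipeline = float(order_qty)
        online_stock = self.inventory * online_tenths / 10.0
        offline_stock = self.inventory - online_stock
        d_online, d_offline = np.random.poisson(6), np.random.poisson(8)  # assumed channel demand
        sold = min(online_stock, d_online) + min(offline_stock, d_offline)
        self.returns = 0.2 * min(online_stock, d_online)  # assumed 20% online return rate
        self.inventory = max(self.inventory - sold, 0.0)
        lost_sales = (d_online + d_offline) - sold
        # Cost = holding + lost-sales penalty + ordering cost; reward is its negative
        reward = -(0.5 * self.inventory + 2.0 * lost_sales + 0.1 * order_qty)
        return self._obs(), reward, False, {}

    def _obs(self):
        return np.array([self.inventory, self.pipeline, self.returns], dtype=np.float32)
```

A PPO agent would then be trained against this interface, with the hierarchical structure handled by separating the ordering and allocation decisions across levels of the HMDP.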

Beer-Game-RL

A reinforcement learning solution for the Supply Chain Beer Game. Modeled the game's supply chain dynamics in Python (NumPy, Pandas) and implemented Q-Learning to optimize ordering strategies. Achieved a 31% reduction in total costs by modifying the state space, demonstrating the potential of RL in supply chain optimization.

Repo: Beer-Game-RL Technologies: Python, NumPy, Pandas, Q-Learning Highlights: Combines supply chain simulation with reinforcement learning to explore automated decision-making and cost minimization in complex systems.
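
As a rough illustration of the approach, the sketch below shows a tabular Q-learning update over a discretized supply-chain state; the bucketing, action range, and hyperparameters are assumptions for illustration, not the exact state-space modification used in the repo.

```python
# Illustrative tabular Q-learning sketch for a single Beer Game role.
import random
from collections import defaultdict

ACTIONS = range(0, 11)                 # order 0-10 cases of beer per period
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)                 # Q[(state, action)] -> estimated value

def discretize(inventory: int, backlog: int, incoming: int) -> tuple:
    """Bucket the raw observation into a compact state key (assumed bucketing)."""
    return (min(inventory // 5, 6), min(backlog // 5, 6), min(incoming // 5, 6))

def choose_order(state: tuple) -> int:
    """Epsilon-greedy order quantity."""
    if random.random() < EPSILON:
        return random.choice(list(ACTIONS))
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state: tuple, action: int, cost: float, next_state: tuple) -> None:
    """Standard Q-learning update; the reward is the negative holding + backlog cost."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    target = -cost + GAMMA * best_next
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
```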

🛠️ Languages and Tools:

Languages

Python, SQL, R

Visualization

Tableau, QuickSight
