IT Engineer
Role Summary
Maintain and configure Windows Server and Ubuntu systems with a focus on automation, patch management, and service reliability across multiple environments. Manage SQL Server databases, including scripting, backups, data transformation, and integration with a web application. Lead infrastructure automation initiatives and cloud cost optimization, and support DevOps practices through custom scripting, performance monitoring, and infrastructure-as-code strategies.
Systems / Scope
- Product: Bee360 enterprise PPM (Project Portfolio Management) platform
- Infrastructure: Multi-environment setup (PROD, TEST/EDU; DEV merged into TEST/EDU), 6+ server instances per environment
- Stakeholders: IT Governance team, Financial team, Project Managers, Development team
- Scale: AWS cloud infrastructure (EC2, RDS), Docker containerization, SystemD automation across all environments
- Data Pipeline: Bee360 database → Power BI dashboards, SAP → Bee360 ETL pipeline
Key Achievements
- Maintained and configured Windows Server and Ubuntu systems with a focus on automation, patch management, and service reliability across multiple environments
- Managed SQL Server databases, including scripting, backups, data transformation, and integration with a web application, ensuring data reliability and performance
- Optimized cloud infrastructure for cost-efficiency by redesigning architectures, rightsizing services, and leveraging reserved instances where applicable
- Supported DevOps practices through custom scripting, performance monitoring, and infrastructure-as-code strategies
- Developed automated data pipeline connecting application to Power BI, enabling real-time business insights for IT Governance team
- Reduced AWS cloud infrastructure costs by €5,200/month through environment consolidation and right-sizing
- Built replicable ETL infrastructure automation for SAP-Bee360 data quality validation and file processing
Interview Stories
SAP-Bee360 Interface Enhancement - Reducing Periodic Errors
- Situation: The SAP to Bee360 interface was experiencing periodic import failures due to data quality issues in CSV files. Problematic characters (like exclamation marks and quotation marks) entered in SAP were breaking the Bee360 import interface, causing manual intervention and data correction work.
- Complication: The errors were recurring and required manual file cleanup before import. Each failure delayed data processing and meant manually identifying and fixing problematic characters in CSV files, making the process time-consuming and error-prone.
- Actions: I developed a Ruby-based ETL infrastructure automation solution to handle data quality validation and file processing. The solution included:
- Automated character cleanup functions to remove problematic characters before import
- Character analysis tools to proactively identify issues
- File comparison utilities to detect new problematic patterns
- Automated ETL pipeline with file classification, header management, and archive creation
- Version-controlled automation scripts in Git repository for replicability
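The cleanup and file-classification steps above can be sketched in Ruby. This is a minimal illustration, not the production scripts: the method names, the set of problematic characters, and the filename patterns are assumptions for the example.

```ruby
require "csv"

# Characters that have broken the Bee360 import in the past
# (illustrative set: exclamation marks and quotation marks).
PROBLEM_CHARS = /[!"]/.freeze

# Strip problematic characters from every field of one CSV row.
def clean_row(row)
  row.map { |field| field.to_s.gsub(PROBLEM_CHARS, "") }
end

# Parse a CSV string, clean each row, and re-emit sanitized CSV text.
def clean_csv(text)
  CSV.parse(text).map { |row| clean_row(row).to_csv }.join
end

# Classify a file by name so the pipeline can apply the right headers.
# The patterns below are hypothetical; the order matters so that
# "capex_depr" files are not swallowed by the plain "capex" match.
def classify(filename)
  case filename
  when /capex_depr/i then :capex_depreciation
  when /capex/i      then :capex
  when /opex/i       then :opex
  else :unknown
  end
end
```

In a pipeline like this, cleaning before import (rather than after a failure) is what turns a recurring manual fix into a one-time automation task.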
- Result: The automation significantly reduced periodic import errors by handling data quality issues automatically. The solution processes multiple file types (OPEX, CAPEX, CAPEX depreciation) with standardized workflows, eliminating manual file cleanup work and reducing import failures. It has run reliably since deployment, improving overall data pipeline stability.
- What I learned: Proactive data quality validation and automation can prevent recurring issues more effectively than reactive manual fixes. Building reusable ETL infrastructure with version control ensures the solution can be maintained and improved over time.
Technical Interview Participation - Building the Team
- Situation: The team needed to hire a new colleague to support growing infrastructure and database responsibilities. I was asked to participate in the technical interview process to assess candidates' technical capabilities and cultural fit.
- Complication: Finding the right candidate required balancing technical skills (database, infrastructure, automation) with ability to work independently, learn quickly, and contribute to team knowledge sharing.
- Actions: I participated in technical interviews, evaluating candidates on:
- Database and SQL knowledge
- Infrastructure and automation understanding
- Problem-solving approach
- Communication and documentation skills
- Cultural fit with team dynamics
- Result: We identified and hired a strong candidate who has now been with the team for approximately 2-3 years. The colleague has integrated well and contributes to infrastructure and database work, validating the selection process.
- What I learned: Participating in hiring decisions helps ensure team growth with candidates who complement existing skills and share similar values around automation, documentation, and technical excellence. The experience also provided perspective on how to present technical work during interviews.
AWS Cost Optimization Initiative (2025)
- Situation: AWS cloud costs were escalating with underutilized infrastructure across multiple environments. The team needed to optimize costs while maintaining service reliability and performance.
- Complication: Cost optimization required careful analysis to avoid impacting production systems. We needed to identify unused resources, right-size instances based on actual usage, and consolidate environments where possible.
- Actions: I analyzed resource usage with Dynatrace to understand actual utilization patterns, then led a two-phase cost optimization initiative:
- Phase 1: Merged DEV environment into TEST/EDU, shutting down 2 DEV Windows servers and 1 RDS database instance
- Phase 2: Right-sized TEST VMs from m5.2xlarge (8 vCPU, 32 GiB) to m7i-flex.xlarge (4 vCPU, 16 GiB) based on actual usage data
- Documented migration procedures with rollback instructions
- Created cost calculation procedures for ongoing visibility
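A cost-calculation procedure like the one above can be sketched as a small Ruby script that compares fleet costs before and after right-sizing. The hourly prices and instance counts below are placeholder assumptions for illustration, not actual AWS rates or the real fleet.

```ruby
# Approximate billable hours in a month.
HOURS_PER_MONTH = 730

# fleet: hash of instance type => [count, hourly_price_eur]
def monthly_cost(fleet)
  fleet.sum { |_type, (count, hourly)| count * hourly * HOURS_PER_MONTH }
end

# Estimated monthly saving from a right-sizing change, in EUR.
def monthly_saving(before, after)
  (monthly_cost(before) - monthly_cost(after)).round(2)
end

# Example with made-up prices and counts:
before = { "m5.2xlarge"      => [4, 0.46] }
after  = { "m7i-flex.xlarge" => [4, 0.19] }
puts monthly_saving(before, after)
```

Keeping the calculation in a script rather than a one-off spreadsheet gives the ongoing cost visibility the procedure was meant to provide.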
- Result: Reduced AWS monthly costs by €5,200+ while maintaining performance. The newer generation hardware (m7i-flex) provided better price-performance ratio. The initiative demonstrated data-driven infrastructure decisions and established methodology for future cost optimization.
- What I learned: Usage analysis with monitoring tools provides objective data for infrastructure decisions. Right-sizing based on actual usage rather than assumptions can achieve significant cost savings without performance impact. Documenting the process enables repeatable optimization initiatives.