<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Saesha</id>
	<title>ISLAB/CAISR - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://mw.hh.se/caisr/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Saesha"/>
	<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Special:Contributions/Saesha"/>
	<updated>2026-04-04T10:24:22Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.13</generator>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5672</id>
		<title>Conversational AI for Reliable Insights from Industrial Telemetry (with Alfa Laval)</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5672"/>
		<updated>2025-11-26T17:20:49Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=collaborate with Alfa Laval (a leading national and global company); Conversational AI for industrial telemetry, combining language models with numerical data and documentation to deliver reliable, explainable insights on machine status and performance.&lt;br /&gt;
|TimeFrame=Spring 2025&lt;br /&gt;
|Prerequisites=Good knowledge of machine learning&lt;br /&gt;
|Supervisor=Mahmoud Rahat, Saeed Gholami Shahbandi&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
= Conversational AI for Reliable Insights from Industrial Telemetry =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Opportunity to collaborate with Alfa Laval, a leading national and global company.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This project explores conversational AI for industrial telemetry: enabling a system to answer questions about recent operations, KPIs, diagnostics, prognostics, and root cause analysis across machine --&amp;gt; vessel --&amp;gt; fleet. The work fuses language models with structured numerical time series and authoritative technical documents to produce grounded, citeable answers rather than freehand text. Emphasis is on numerical reliability, clear provenance, and explainability in real-world settings.&lt;br /&gt;
&lt;br /&gt;
Using real industrial datasets, in collaboration with Alfa Laval, the project investigates how to format telemetry for interpretability (unit-aware schemas, anomaly-preserving rollups), how to provide essential context (temporal windows, baselines, thresholds), and how to ground answers (retrieval, secure tool-use, validators) to reduce hallucination and improve factuality. A shared reference approach and evaluation protocol will guide several focused thesis projects that contribute complementary components within this overall scope.&lt;br /&gt;
&lt;br /&gt;
== Prospective thesis project topics ==&lt;br /&gt;
&lt;br /&gt;
=== Tool-Augmented Telemetry Reasoning for Conversational Interfaces ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Ensure numerically correct, citable answers by computing first (secure tools), validating (units/ranges/baselines), then citing results in chat.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Planner vs. retrieval-first policies; telemetry stats (windowed means, anomalies, CIs); RBAC; evidence citations; optional RAG for limits/specs.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Expected outcome&amp;#039;&amp;#039;: Reference agent + validator toolkit; policy benchmark for factuality vs. latency.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;RQs&amp;#039;&amp;#039;: Which tool policy best balances factuality and latency? Which validators (units/thresholds/temporal baselines) cut numeric errors most?&lt;br /&gt;
&lt;br /&gt;
=== Schema- &amp;amp; Context-Optimized Telemetry Formats for LLM Numeric Reasoning ===&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Improve interpretability via unit-aware “stat-cards” and explicit temporal context (now/Δ/rolling baseline) with lightweight prompting or minimal tuning.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Schema variants (JSONL, compact tables, key-value with units/limits); anomaly-preserving rollups; prompt templates vs. small instruction-tunes.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Expected outcome&amp;#039;&amp;#039;: Schema guidelines + converters + prompt library; ablations of context components vs. accuracy/latency.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;RQs&amp;#039;&amp;#039;: Which schema+context combo maximizes numeric correctness per token? How much do spec guardrails and temporal packaging reduce errors?&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5608</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5608"/>
		<updated>2025-10-22T12:33:50Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Email=saeed.gholami.shahbandi@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|url=https://www.linkedin.com/in/saeedghsh/&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5607</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5607"/>
		<updated>2025-10-22T12:32:40Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Email=saeed.gholami.shahbandi@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5601</id>
		<title>Conversational AI for Reliable Insights from Industrial Telemetry (with Alfa Laval)</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5601"/>
		<updated>2025-10-21T08:47:25Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=collaborate with Alfa Laval (a leading national and global company); Conversational AI for industrial telemetry, combining language models with numerical data and documentation to deliver reliable, explainable insights on machine status and performance.&lt;br /&gt;
|TimeFrame=Spring 2025&lt;br /&gt;
|Prerequisites=Good knowledge of machine learning&lt;br /&gt;
|Supervisor=Mahmoud Rahat, Saeed Gholami Shahbandi&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
= Conversational AI for Reliable Insights from Industrial Telemetry =&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Opportunity to collaborate with Alfa Laval, a leading national and global company.&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This project explores conversational AI for industrial telemetry: enabling a system to answer questions about recent operations, KPIs, diagnostics, prognostics, and root cause analysis across machine --&amp;gt; vessel --&amp;gt; fleet. The work fuses language models with structured numerical time series and authoritative technical documents to produce grounded, citeable answers rather than freehand text. Emphasis is on numerical reliability, clear provenance, and explainability in real-world settings.&lt;br /&gt;
&lt;br /&gt;
Using real industrial datasets, in collaboration with Alfa Laval, the project investigates how to format telemetry for interpretability (unit-aware schemas, anomaly-preserving rollups), how to provide essential context (temporal windows, baselines, thresholds), and how to ground answers (retrieval, secure tool-use, validators) to reduce hallucination and improve factuality. A shared reference approach and evaluation protocol will guide several focused thesis projects that contribute complementary components within this overall scope.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5600</id>
		<title>Conversational AI for Reliable Insights from Industrial Telemetry (with Alfa Laval)</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5600"/>
		<updated>2025-10-21T08:23:55Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Conversational AI for industrial telemetry, combining language models with numerical data and documentation to deliver reliable, explainable insights on machine status and performance.&lt;br /&gt;
|TimeFrame=Spring 2025&lt;br /&gt;
|Prerequisites=Good knowledge of machine learning&lt;br /&gt;
|Supervisor=Mahmoud Rahat, Saeed Gholami Shahbandi&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
= Conversational AI for Reliable Insights from Industrial Telemetry =&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This project explores conversational AI for industrial telemetry: enabling a system to answer questions about recent operations, KPIs, diagnostics, prognostics, and root cause analysis across machine --&amp;gt; vessel --&amp;gt; fleet. The work fuses language models with structured numerical time series and authoritative technical documents to produce grounded, citeable answers rather than freehand text. Emphasis is on numerical reliability, clear provenance, and explainability in real-world settings.&lt;br /&gt;
&lt;br /&gt;
Using real industrial datasets, in collaboration with Alfa Laval, the project investigates how to format telemetry for interpretability (unit-aware schemas, anomaly-preserving rollups), how to provide essential context (temporal windows, baselines, thresholds), and how to ground answers (retrieval, secure tool-use, validators) to reduce hallucination and improve factuality. A shared reference approach and evaluation protocol will guide several focused thesis projects that contribute complementary components within this overall scope.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5599</id>
		<title>Conversational AI for Reliable Insights from Industrial Telemetry (with Alfa Laval)</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5599"/>
		<updated>2025-10-21T08:22:27Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Conversational AI for industrial telemetry, combining language models with numerical data and documentation to deliver reliable, explainable insights on machine status and performance.&lt;br /&gt;
|TimeFrame=Spring 2025&lt;br /&gt;
|Supervisor=Saeed Gholami Shahbandi, Mahmoud Rahat&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
= Conversational AI for Reliable Insights from Industrial Telemetry =&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This project explores conversational AI for industrial telemetry: enabling a system to answer questions about recent operations, KPIs, diagnostics, prognostics, and root cause analysis across machine --&amp;gt; vessel --&amp;gt; fleet. The work fuses language models with structured numerical time series and authoritative technical documents to produce grounded, citeable answers rather than freehand text. Emphasis is on numerical reliability, clear provenance, and explainability in real-world settings.&lt;br /&gt;
&lt;br /&gt;
Using real industrial datasets, in collaboration with Alfa Laval, the project investigates how to format telemetry for interpretability (unit-aware schemas, anomaly-preserving rollups), how to provide essential context (temporal windows, baselines, thresholds), and how to ground answers (retrieval, secure tool-use, validators) to reduce hallucination and improve factuality. A shared reference approach and evaluation protocol will guide several focused thesis projects that contribute complementary components within this overall scope.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5598</id>
		<title>Conversational AI for Reliable Insights from Industrial Telemetry (with Alfa Laval)</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Conversational_AI_for_Reliable_Insights_from_Industrial_Telemetry_(with_Alfa_Laval)&amp;diff=5598"/>
		<updated>2025-10-21T08:20:56Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=This project investigates how conversational AI can be combined with numerical telemetry and technical documentation to provide reliable, exp...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=This project investigates how conversational AI can be combined with numerical telemetry and technical documentation to provide reliable, explainable answers about machine status, performance, and maintenance. The focus is on grounding language models in real industrial data to support trustworthy insights across machine, vessel, and fleet levels.&lt;br /&gt;
|TimeFrame=Spring 2025&lt;br /&gt;
|Supervisor=Saeed Gholami Shahbandi, Mahmoud Rahat&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
= Conversational AI for Reliable Insights from Industrial Telemetry =&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
This project explores conversational AI for industrial telemetry: enabling a system to answer questions about recent operations, KPIs, diagnostics, prognostics, and root cause analysis across machine --&amp;gt; vessel --&amp;gt; fleet. The work fuses language models with structured numerical time series and authoritative technical documents to produce grounded, citeable answers rather than freehand text. Emphasis is on numerical reliability, clear provenance, and explainability in real-world settings.&lt;br /&gt;
&lt;br /&gt;
Using real industrial datasets, in collaboration with Alfa Laval, the project investigates how to format telemetry for interpretability (unit-aware schemas, anomaly-preserving rollups), how to provide essential context (temporal windows, baselines, thresholds), and how to ground answers (retrieval, secure tool-use, validators) to reduce hallucination and improve factuality. A shared reference approach and evaluation protocol will guide several focused thesis projects that contribute complementary components within this overall scope.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Workshop_Automation_together_with_Volvo_Group&amp;diff=5595</id>
		<title>Workshop Automation together with Volvo Group</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Workshop_Automation_together_with_Volvo_Group&amp;diff=5595"/>
		<updated>2025-10-20T07:31:43Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=This project, in collaboration with Volvo Group, investigates how automation can raise workshop throughput, repair quality, and technician experience through data-driven perception and pragmatic use of automation.&lt;br /&gt;
|TimeFrame=Fall 2025&lt;br /&gt;
|Prerequisites=Good knowledge of machine learning &amp;amp; robotics&lt;br /&gt;
|Supervisor=Sławomir Nowaczyk, Saeed Gholami Shahbandi&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
= Next-generation Orchestrated Workshop Automation (NOWA) =&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
&lt;br /&gt;
Next-generation Orchestrated Workshop Automation (NOWA) is a pre-study where Volvo and the CAISR group at Halmstad University investigate how automation can raise workshop throughput, repair quality and technician experience through data-driven perception and pragmatic use of automation. We will demonstrate five focused showcases that create immediate value while de-risking future robotics: (1) camera-based visual checks for component inspection; (2) an LLM-supported check-in that structures driver symptom descriptions ; (3) engine idle-sound anomaly pre-screening; (4) a mobile parts &amp;amp; tool runner to reduce technician “walking waste”; and (5) robotic support for oil change operation. Each pilot will be co-designed with technicians, instrumented with clear KPIs, and packaged with site-agnostic Standard Operating Procedures—forming a scalable pathway from today’s assisted workflows to tomorrow’s robot-enabled workshop automation.&lt;br /&gt;
&lt;br /&gt;
Since the five showcases are at different levels of maturity, the expected results will also vary, from the practical demonstration of the quantifiable benefits of robot-guided cameras over existing manual procedures for (1) to requirements specification and initial cost-benefit analysis for (5). Overall, however, NOWA directly advances the call’s focus on productivity, sustainability, and human-centred digitalisation in the heavy-vehicle aftermarket.&lt;br /&gt;
&lt;br /&gt;
This pre-study couples rigorous analysis of existing multimodal data (images and idle-sound) with pragmatic automation showcases (LLM-assisted check-in and robotic tool/parts runner) to deliver near-term, measurable value while laying a scalable foundation for future robotic inspection.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Prospective thesis project topics ==&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;PLEASE NOTE&amp;#039;&amp;#039;&amp;#039;: project assignments will be finalized based on the number of students/groups, Volvo priorities, and NOWA planning. You are not guaranteed an exact 1-to-1 match with any single showcase; your thesis may combine elements across the four themes. We will aim to align your project with your interests while meeting project constraints.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Visual Checks: From Ad-Hoc Photos to Diagnostic Visual Protocols ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Define site-agnostic image-capture and quality-assessment protocols that turn workshop photos into diagnostically useful evidence for routine visual checks, without increasing technician burden. The protocol is a step toward workshop automation where future robot systems perform consistent capture; the thesis can explore what requirements and constraints this implies for robot-mounted cameras and paths.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Audit existing image data, specify minimal quality/completeness metrics (e.g., sharpness, exposure, key-region&lt;br /&gt;
visibility), and draft capture Standard operating procedures (SOP) for both human-held and robot-mounted cameras (manipulator or mobile base), validating via&lt;br /&gt;
light bench tests or synthetic studies rather than full implementation.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Deliveries&amp;#039;&amp;#039;:&lt;br /&gt;
** KPI and metric definitions tied to &amp;quot;diagnostically useful&amp;quot; images.&lt;br /&gt;
** Data audit plan and curated exemplars for target components.&lt;br /&gt;
** Draft Standard operating procedures (SOP) for capture and review checklists; risk &amp;amp; privacy note.&lt;br /&gt;
** Baseline modeling/evaluation plan (ranking/filtering) with simulated ablations to link quality factors to expected diagnostic value.&lt;br /&gt;
** Go/hold decision criteria, Technology Readiness Level (TRL) and Return on Investment (ROI) roadmap outline.&lt;br /&gt;
&lt;br /&gt;
*&amp;#039;&amp;#039;Research Questions&amp;#039;&amp;#039;:&lt;br /&gt;
** Which minimal, site-agnostic image-quality and completeness metrics best predict diagnostic usefulness for routine visual checks?&lt;br /&gt;
** Which capture protocol elements (angles, lighting, standoff) most affect those metrics in workshops?&lt;br /&gt;
&lt;br /&gt;
=== LLM-Assisted Check-In and Handover: Structure, Safety, and UX ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Design and evaluate a lightweight, human-in-the-loop LLM agent that structures symptom capture, supports planning/verification of required resets, and generates technician-ready handovers—improving reception flow while safeguarding inclusivity and transparency.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Service blueprinting, Wizard-of-Oz trials, prompt/flow design with guardrails, and inclusive language checks; define structured fields and interoperability stubs without building full back-end integrations. Evaluate with time–motion baselines and scenario tests.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Deliveries&amp;#039;&amp;#039;:&lt;br /&gt;
** Reception blueprint, field schema, and UX wireframes.&lt;br /&gt;
** Prompt/flow library with safety guardrails and fallback rules.&lt;br /&gt;
** Evaluation plan for completeness and dwell-time effects; risk &amp;amp; data-protection note.&lt;br /&gt;
** Standard operating procedures (SOP) excerpts for reception/hand-over; indicative Technology Readiness Level (TRL) and Return on Investment (ROI) pathway.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Research Question(s)&amp;#039;&amp;#039;:&lt;br /&gt;
** What minimal set of structured prompts and agent behaviors achieves high symptom-capture completeness while maintaining or reducing driver dwell time at reception?&lt;br /&gt;
** How should explainability and escalation be designed so technicians trust the agent’s outputs in a noisy, multi-lingual environment?&lt;br /&gt;
&lt;br /&gt;
=== Idle Sound &amp;amp; Vibration Pre-Screening: Signals for Early Triage ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Specify a generic protocol and analysis plan for using idle-engine sound and basic vibration sensing to pre-screen for anomalies, focusing on robust capture, feature extraction, and triage value rather than model optimization.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Sensor/capture guidelines (microphone/accelerometer placement, duration, environment), privacy and safety assessment,&lt;br /&gt;
baseline feature bank (time/frequency), and a simulation plan showing how pre-screening could influence triage and Mean Time to Repair (MTTR) under conservative assumptions. Bench tests preferred over field deployments at this stage. If field access is limited, prioritize alternative data sources: controlled bench recordings on campus rigs or donor vehicles, short capture campaigns with university fleet or partner garages, augmentation and synthesis for baseline method validation, and selective use of public datasets where task-aligned.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Deliveries&amp;#039;&amp;#039;:&lt;br /&gt;
** Capture Standard operating procedures (SOP); idle conditions, noise controls, metadata.&lt;br /&gt;
** Feature-extraction and labeling plan; small controlled test design.&lt;br /&gt;
** Discrete-event &amp;quot;what-if&amp;quot; simulation spec connecting alerts to workflow outcomes; risk register.&lt;br /&gt;
** Criteria for go/hold and Technology Readiness Level (TRL) and Return on Investment (ROI) progression.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Research Question(s)&amp;#039;&amp;#039;:&lt;br /&gt;
** Which capture and feature combinations are likely to yield robust anomaly pre-screening across heterogeneous workshop acoustics?&lt;br /&gt;
** What is the expected effect of pre-screening alerts on triage queues and Mean Time to Repair (MTTR) in simulated reception workflows?&lt;br /&gt;
&lt;br /&gt;
=== Mobile Parts &amp;amp; Tool Runner: Logistics Blueprint and Impact Estimation ===&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Objective&amp;#039;&amp;#039;: Develop a minimal, technology-agnostic blueprint for a mobile parts/tool runner (human-operated cart today, robot-ready tomorrow) that reduces technician &amp;quot;walking waste&amp;quot; and time-to-first-wrench, including dispatch rules, location strategies, and safety/ergonomics considerations.&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Scope&amp;#039;&amp;#039;: Process maps, layout assumptions, pick/pack/dispatch logic, and discrete-event simulations to estimate effects on utilization and rework; evaluate feasibility and integration constraints without building hardware. Co-design with technicians to ensure acceptance. Primary evaluation will be via discrete-event simulation; optionally validate selected micro-flows on available platforms (e.g., Baxter for handoff and staging tasks, Robotnik mobile base for point-to-point parts delivery) to de-risk assumptions about dispatch latency and human-robot interaction.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Deliveries&amp;#039;&amp;#039;:&lt;br /&gt;
** Service blueprint and SOP draft (request -&amp;gt; pick -&amp;gt; deliver -&amp;gt; confirm).&lt;br /&gt;
** Simulation plan with baseline time–motion data and KPI definitions.&lt;br /&gt;
** Preliminary cost–benefit model and Technology Readiness Level (TRL) and Return on Investment (ROI) roadmap; change-management notes.&lt;br /&gt;
** Safety/ergonomics and equality considerations for roles and workflows.&lt;br /&gt;
&lt;br /&gt;
* &amp;#039;&amp;#039;Research Question(s)&amp;#039;&amp;#039;:&lt;br /&gt;
** Which dispatch and localization strategies (e.g., zone-based, on-demand batching) minimize technician walking time without disrupting safety and flow?&lt;br /&gt;
** Under conservative assumptions, what reduction in time-to-first-wrench is achievable in simulation for common job families?&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5593</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=5593"/>
		<updated>2025-10-17T12:23:14Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
[[Category:staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3938</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3938"/>
		<updated>2018-06-27T17:33:40Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|url=http://saeed.im&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
[[Category:alumni]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3862</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3862"/>
		<updated>2018-02-11T10:17:43Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=Licentiate&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://saeed.im&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3846</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3846"/>
		<updated>2018-01-13T12:40:48Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=Licentiate&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3845</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3845"/>
		<updated>2018-01-13T12:40:16Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=Licentiate&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=saeed.im&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3844</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3844"/>
		<updated>2018-01-13T12:37:43Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=Licentiate&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=saeed.im&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3843</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3843"/>
		<updated>2018-01-13T12:35:55Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Cell Phone=+46-762868530&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=saeed.im&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3403</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3403"/>
		<updated>2017-01-12T06:55:52Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Cell Phone=+46-762868530&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://se.linkedin.com/pub/saeed-gholami-shahbandi/41/365/4b0/&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. I accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. I participated in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas, Dr. Åstrand and Dr. Philippsen. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge of their surrounding environment. My main interests lie in robotics, computer vision and machine learning.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3402</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=3402"/>
		<updated>2017-01-04T20:56:24Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Cell Phone=+46-762868530&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://se.linkedin.com/pub/saeed-gholami-shahbandi/41/365/4b0/&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantic Mapping&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. Accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. Participating in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas and Associate Prof. Åstrand. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge on their surrounding environment. My main interests lie in robotics, computer vision and machine learning&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Data_Mining_In_a_Warehouse_Inventory&amp;diff=3314</id>
		<title>Data Mining In a Warehouse Inventory</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Data_Mining_In_a_Warehouse_Inventory&amp;diff=3314"/>
		<updated>2016-10-26T10:45:33Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=A study of feature selection and distance measures for clustering big number of categories (&amp;gt;1000) and novelty detection in warehouse environ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=A study of feature selection and distance measures for clustering big number of categories (&amp;gt;1000) and novelty detection in warehouse environment.&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=object recognition, signal processing, feature selection, unsupervised clustering, large scale many class classification, data mining.&lt;br /&gt;
|TimeFrame=Spring 2017&lt;br /&gt;
|References=Zeynep Akata, Florent Perronnin, Zaid Harchaoui, Cordelia Schmid.  Good Practice in Large-Scale  Learning  for  Image  Classi cation.   IEEE  Transactions  on  Pattern  Analysis  and  Machine Intelligence, Institute of Electrical and Electronics Engineers, 2014, 36 (3), pp.507-520.&amp;lt;10.1109/TPAMI.2013.146&amp;gt;.&amp;lt;hal-00835810&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Florent Perronnin, Zeynep Akata, Zaid Harchaoui, Cordelia Schmid.  Towards Good Practice in Large-Scale Learning for Image Classification.  CVPR 2012 - IEEE Computer Vision and Pattern  Recognition,  Jun  2012,  Providence  (RI),  United  States.   IEEE,  pp.3482-3489,  2012,&amp;lt;10.1109/CVPR.2012.6248090&amp;gt;.&amp;lt;hal-00690014&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Raphael  Puget,  Nicolas  Baskiotis,  Patrick  Gallinari.   Sequential  Dynamic  Classi cation  for Large Scale Multi-class Problems.  Extreme Classi cation Workshop at ICML, Jul 2015, Lille,France.  2015.&amp;lt;hal-01207428&amp;gt;&lt;br /&gt;
|Prerequisites=Programming skills, Machine Learning, Computer Vision, Data Mining.&lt;br /&gt;
|Supervisor=Saeed Gholami Shahbandi, Björn Åstrand, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
;Background&lt;br /&gt;
: Object recognition in problems entailing many classes is a challenging task. One example of such problems is the inventory list of warehouse. The inventory of typical warehouses often contain up to 10K different classes of objects. In this project we intend to develop inventory list maintanance method that is able to learn the number of classes of objects and train a classifier from the data. Towards this objective, we employ the background knowledge (e.g. from the Warehouse Management System - WMS) to constrain the complexity of the problem.&lt;br /&gt;
&lt;br /&gt;
;Objectives&lt;br /&gt;
: To develop an incremental clustering algorithm, that learns new classes of object through novelty detection. The background knowledge (e.g. WMS), which is an important source of information for constraining the problem, should be exploit towards a more robust system design.&lt;br /&gt;
&lt;br /&gt;
;Research Questions&lt;br /&gt;
: What is the optimal feature space and clustering technique for object identification in large-scale many classes? How to use background knowledge as clustering cues? How to employ novelty detection for learning new classes incrementally?&lt;br /&gt;
&lt;br /&gt;
;Setup&lt;br /&gt;
: dataset from a real-world warehouses.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Mining_For_Meanings_In_Robot_Maps&amp;diff=3313</id>
		<title>Mining For Meanings In Robot Maps</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Mining_For_Meanings_In_Robot_Maps&amp;diff=3313"/>
		<updated>2016-10-26T10:43:22Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=To build a hybrid map by augmenting the intrinsic kinematic model of a mobile robot to a spatial map, and semi-supervised learning of meanings towards self/situation awareness.&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=robotics, mapping, semantic maps, unsupervised semantic mapping, data mining, kinematic model, situation-awareness.&lt;br /&gt;
|TimeFrame=Spring 2017&lt;br /&gt;
|References=Pronobis, Andrzej, and Rajesh PN Rao. &amp;quot;Learning Deep Generative Spatial Models for Mobile Robots.&amp;quot; arXiv preprint arXiv:1610.02627 (2016).&lt;br /&gt;
&lt;br /&gt;
Khalil, Wisama, and Etienne Dombre. Modeling, identification and control of robots. Butterworth-Heinemann, 2004.&lt;br /&gt;
&lt;br /&gt;
Shahbandi, Saeed Gholami, Björn Åstrand, and Roland Philippsen. &amp;quot;Semi-supervised semantic labeling of adaptive cell decomposition maps in well-structured environments.&amp;quot; Mobile Robots (ECMR), 2015 European Conference on. IEEE, 2015.&lt;br /&gt;
|Prerequisites=Programming (preferably C++ or Python), Machine Learning, Data Mining. Bonus: Mobile Robots (kinematic/dynamic modeling), ROS.&lt;br /&gt;
|Supervisor=Saeed Gholami Shahbandi, Björn Åstrand, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
;Background&lt;br /&gt;
: Each region of a spatial robot map potentially has a meaning (semantics). For instance the map of house could be segmented into kitchen, corridor, bedroom, etc. The fact that these meanings are generated from what humans understand of their surrounding, is crucial for a successful communication (e.g. in HRI). On the other hand a robot becomes more “situation-aware” by knowing the semantic of its surrounding. The aim of this project is to integrate the kinematic/dynamic model of the robot into the spatial map of the environment. And employ an unsupervised method to identify the semantics of the environment while the “self” of the robot is also reflected in the spatial map.&lt;br /&gt;
&lt;br /&gt;
;Objectives&lt;br /&gt;
: The expectation is to bridge a robot’s self-awareness (e.g. traversability of a path based on its intrinsic models and the terrain), to the situation-awareness that is supposedly capable of estimating future state of the situation.&lt;br /&gt;
&lt;br /&gt;
;Research Questions&lt;br /&gt;
: How to integrate the intrinsic kinematic/dynamic models of a mobile robot seamlessly into the spatial map of the environment? How to identify ego-centric meanings that emerge from the integration of spatial maps and robot’s kinematic/dynamic model?&lt;br /&gt;
&lt;br /&gt;
;Setup&lt;br /&gt;
: Simulation, and experimental results the lab environment.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Mining_For_Meanings_In_Robot_Maps&amp;diff=3312</id>
		<title>Mining For Meanings In Robot Maps</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Mining_For_Meanings_In_Robot_Maps&amp;diff=3312"/>
		<updated>2016-10-26T10:42:09Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=To build a hybrid map by augmenting the intrinsic kinematic model of a mobile robot to a spatial map, and semi-supervised learning of meaning...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=To build a hybrid map by augmenting the intrinsic kinematic model of a mobile robot to a spatial map, and semi-supervised learning of meanings towards self/situation awareness.&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=robotics, mapping, semantic maps, unsupervised semantic mapping, data mining, kinematic model, situation-awareness.&lt;br /&gt;
|TimeFrame=Spring 2017&lt;br /&gt;
|References=Pronobis, Andrzej, and Rajesh PN Rao. &amp;quot;Learning Deep Generative Spatial Models for Mobile Robots.&amp;quot; arXiv preprint arXiv:1610.02627 (2016).&lt;br /&gt;
&lt;br /&gt;
Khalil, Wisama, and Etienne Dombre. Modeling, identification and control of robots. Butterworth-Heinemann, 2004.&lt;br /&gt;
&lt;br /&gt;
Shahbandi, Saeed Gholami, Björn Åstrand, and Roland Philippsen. &amp;quot;Semi-supervised semantic labeling of adaptive cell decomposition maps in well-structured environments.&amp;quot; Mobile Robots (ECMR), 2015 European Conference on. IEEE, 2015.&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Programming (preferably C++ or Python), Machine Learning, Data Mining. Bonus: Mobile Robots (kinematic/dynamic modeling), ROS.&lt;br /&gt;
|Supervisor=Saeed Gholami Shahbandi, bj&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Adaptive_warning_field_system&amp;diff=3311</id>
		<title>Adaptive warning field system</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Adaptive_warning_field_system&amp;diff=3311"/>
		<updated>2016-10-26T10:39:25Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Adaptive warning field system |Programme=Mobile and Autonomous Systems |Keywords=3D perception, mapping,  |TimeFrame=January 2017 until June ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Adaptive warning field system&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=3D perception, mapping, &lt;br /&gt;
|TimeFrame=January 2017 until June 2017, with possible extension until September 2017&lt;br /&gt;
|References=SAS2-project, http://islab.hh.se/mediawiki/SAS2&lt;br /&gt;
ROS - Robot Operating System, http://www.ros.org/&lt;br /&gt;
OpenCv - http://opencv.org/ &lt;br /&gt;
&lt;br /&gt;
Nemati, Hassan, Åstrand, Björn (2014). Tracking of People in Paper Mill Warehouse Using Laser Range Sensor. 2014 UKSim-AMSS 8th European Modelling Symposium, EMS 2014, Pisa, Italy, 21-23 October, 2014.&lt;br /&gt;
&lt;br /&gt;
Power, P. Wayne, and Johann A. Schoonees. &amp;quot;Understanding background mixture models for foreground segmentation.&amp;quot; Proceedings image and vision computing New Zealand. Vol. 2002. 2002.&lt;br /&gt;
|Prerequisites=Image analysis, programming skills (preferably C++ or Python)&lt;br /&gt;
|Supervisor=Björn Åstrand, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
A central issue for robots and automated guided vehicles (AGVs) is safety; the robot or AGV must not harm humans or damage objects in the environment. Safety concerns have become more and more important as the use of AGVs has spread and advances in sensor technology, sensor integration, and object detection and avoidance have been more widely adapted. Today’s safety systems only work in 2D and consist of a static protection field (in size) and a speed adaptive warning field – higher speed - larger field. However, mostly the size of the warning field is hardcoded and heavily bounded to the AGV route. The setup of such system is costly and takes a long time to adjust for proper and efficient operation.&lt;br /&gt;
&lt;br /&gt;
The goal with this project [as a subset of SAS2 project] is to develop a safety system based on an adaptive warning field that autonomous learns the foreground (static and dynamic obstacles) and background model (static objects). The approach is to use 3D perception along with methods that combine a method that continuously segments background from foreground model (e.g. optical flow approach or Gaussian mixture models), with a method that uses a geometric map to filter out the foreground model. A challenge is how to update/learn the geometric map. &lt;br /&gt;
&lt;br /&gt;
Preferable the solutions are designed as ROS-packages (or, c++,  python, matlab -code).&lt;br /&gt;
&lt;br /&gt;
Resources: Facilities for data logging, cameras, depth sensor, data logging equipment, data set from warehouse and collaboration with industrial partners.&lt;br /&gt;
&lt;br /&gt;
RQ: How to automatically construct and represent a volume of interests for safe traverse of the driverless truck? What sensors to use and how to integrate information from different sources, e.g. other sensors and maps? How to detect the difference between foreground (static and dynamic obstacles) or background model (static objects).&lt;br /&gt;
&lt;br /&gt;
WP1: Literature review and construction of a dataset.&lt;br /&gt;
WP2: Develop methods for estimation of background /foreground model and an adaptive warning field system.&lt;br /&gt;
WP3: Evaluation of the feasibility of the system and comparison with existing systems.&lt;br /&gt;
WP4: [bonus] conference publication (ETFA, ECMR, TAROS)&lt;br /&gt;
&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed system for adaptive warning field system using data acquired in a real warehouse or mine.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Model_behaviour_of_agents_in_a_warehouse_setting&amp;diff=3310</id>
		<title>Model behaviour of agents in a warehouse setting</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Model_behaviour_of_agents_in_a_warehouse_setting&amp;diff=3310"/>
		<updated>2016-10-26T10:36:45Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Model behaviour of agents in a warehouse setting |Programme=Mobile and Autonomous Systems |Keywords=Machine learning, simulation  |TimeFrame=...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Model behaviour of agents in a warehouse setting&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=Machine learning, simulation &lt;br /&gt;
|TimeFrame=January 2017 until June 2017, with possible extension until September 2017&lt;br /&gt;
|References=SAS2-project, http://islab.hh.se/mediawiki/SAS2&lt;br /&gt;
ROS - Robot Operating System, http://www.ros.org/&lt;br /&gt;
OpenCv - http://opencv.org/ &lt;br /&gt;
&lt;br /&gt;
Lidström, Kristoffer, Situation-Aware vehicles – supporting the next generation of cooperative traffic system, PhD thesis, Örebro university, 2012.&lt;br /&gt;
&lt;br /&gt;
Lundström, Jens, Järpe, Eric &amp;amp; Verikas, Antanas, Detecting and exploring deviating behaviour of smart home residents, Expert systems with applications., 55, s. 429-440, 2016&lt;br /&gt;
|Prerequisites=Programming skills (preferably C++ or Python)&lt;br /&gt;
|Supervisor=Björn Åstrand, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
A central issue for robots and automated guided vehicles (AGVs) is safety; the robot or AGV must not harm humans or damage objects in the environment. Safety concerns have become more and more important as the use of AGVs has spread and advances in sensor technology, sensor integration, and object detection and avoidance have been more widely adapted. Today’s safety systems don’t consider the behaviour or the identity of different agents in close proximity to the robot and AGV.&lt;br /&gt;
 &lt;br /&gt;
The goal with this project [as a subset of SAS2 project] is to develop a method for model behaviour of different agents in a warehouse setting and thus use that for predict behaviour in different scenarios. The idea is to investigate if different agents can automatically divided into categories depending on their behaviour and how that information can be used to foresee actions of different agents.   &lt;br /&gt;
&lt;br /&gt;
The study also includes construction of a simulator where the validity of the developed method for behaviour modelling, is evaluated. Real data can also be used to verify the system. Preferable the solutions are designed as ROS-packages (or, c++,  python, matlab -code). &lt;br /&gt;
&lt;br /&gt;
Resources: Facilities for data logging, cameras, depth sensor, data logging equipment, data set from warehouse and collaboration with industrial partners.&lt;br /&gt;
&lt;br /&gt;
RQ: How to learn behaviour different categories of agents (e.g. manual driven trucks, humans, AGVs) especially if they they are only partially observed in time. How to represent behaviour of an agent? &lt;br /&gt;
&lt;br /&gt;
WP1: Literature review and construction of a dataset.&lt;br /&gt;
WP2: Develop methods for model behaviour of agents in a warehouse setting.&lt;br /&gt;
WP3: Comparison study and development of improvements of the different systems.&lt;br /&gt;
WP4: [bonus] conference publication (ETFA, ECMR, TAROS)&lt;br /&gt;
&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed system for model behaviour of agents using simulated data and data acquired in a real warehouse or mine.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Dynamic_Objects_Detection_and_Tracking&amp;diff=3309</id>
		<title>Dynamic Objects Detection and Tracking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Dynamic_Objects_Detection_and_Tracking&amp;diff=3309"/>
		<updated>2016-10-26T10:31:10Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Dynamic Objects Detection and Tracking in Warehouses, Using 3D Sensors.&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=3D Sensor, Point Cloud, Obstacle Detection, Obstacle Tracking, Obstacle Avoidance.&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=Petrovskaya, Anna, and Sebastian Thrun. &amp;quot;Model based vehicle detection and tracking for autonomous urban driving.&amp;quot; Autonomous Robots 26.2-3 (2009): 123-139.&lt;br /&gt;
&lt;br /&gt;
Wojke, N.; Haselich, M., &amp;quot;Moving vehicle detection and tracking in unstructured environments,&amp;quot; Robotics and Automation (ICRA), 2012 IEEE International Conference on , vol., no., pp.3082,3087, 14-18 May 2012.&lt;br /&gt;
&lt;br /&gt;
Moras, J.; Cherfaoui, V.; Bonnifait, P., &amp;quot;A lidar perception scheme for intelligent vehicle navigation,&amp;quot; Control Automation Robotics &amp;amp; Vision (ICARCV), 2010 11th International Conference on , vol., no., pp.1809,1814, 7-10 Dec. 2010&lt;br /&gt;
&lt;br /&gt;
Golovinskiy, Aleksey, Vladimir G. Kim, and Thomas Funkhouser. &amp;quot;Shape-based recognition of 3D point clouds in urban environments.&amp;quot; Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.&lt;br /&gt;
&lt;br /&gt;
Granstrom, K.; Lundquist, C.; Gustafsson, F.; Orguner, U., &amp;quot;Random Set Methods: Estimation of Multiple Extended Objects,&amp;quot; Robotics &amp;amp; Automation Magazine, IEEE , vol.21, no.2, pp.73,82, June 2014&lt;br /&gt;
&lt;br /&gt;
Data Association and Tracking a survey  RoboEarth.&lt;br /&gt;
&lt;br /&gt;
Rusu, Radu Bogdan, and Steve Cousins. &amp;quot;3d is here: Point cloud library (pcl).&amp;quot; Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.&lt;br /&gt;
&lt;br /&gt;
Brostow, Gabriel J., et al. &amp;quot;Segmentation and recognition using structure from motion point clouds.&amp;quot; Computer Vision–ECCV 2008. Springer Berlin Heidelberg, 2008. 44-57.&lt;br /&gt;
&lt;br /&gt;
Drost, Bertram, et al. &amp;quot;Model globally, match locally: Efficient and robust 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.&lt;br /&gt;
&lt;br /&gt;
Biasotti, S. ; Falcidieno, B. ; Giorgi, D. ; Spagnuolo, M. “Mathematical Tools for Shape Analysis and Description”, 2014, Publisher :Morgan &amp;amp; Claypool, Edition:1, ISBN:1627053646&lt;br /&gt;
&lt;br /&gt;
Börcs, Attila, et al. &amp;quot;A Model-based Approach for Fast Vehicle Detection in Continuously Streamed Urban LIDAR Point Clouds.&amp;quot; (2014).&lt;br /&gt;
|Prerequisites=Familiarity with filtering techniques (eg. EKF) for mobile robots localization, Image analysis, programming skills (preferably C++ or Python).&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
; Concise Description&lt;br /&gt;
: This project [as a subset of AIMS project], targets the automation of lift trucks in warehouse environments. Operating automatic guided vehicles in this particular environment is challenging due to the high expected throughput and consequently high traffic. Lift trucks are heavy vehicles operating with relatively high speed in an environment where neither the trucks, nor the humans are well protected as a regular urban traffic. This calls for a extra security measure and cautious decisions. Collision avoidance is an essential skill for mobile robots to guarantee a safe operation in a workspace shared with humans. This project focuses on detection and tracking of dynamic objects in order to avoid collision.&lt;br /&gt;
&lt;br /&gt;
; Objective&lt;br /&gt;
: To reliably detect, segment, and track dynamic objects (eg. humans and lift trucks) from a 3D point cloud, acquired by the means of a 3D sensor mounted on a mobile robot, in a highly structured environment (warehouse).&lt;br /&gt;
&lt;br /&gt;
; Research Questions&lt;br /&gt;
: What is the optimal sensor configuration to minimize the blind spots, data losses due to sensor deficiency, and consequently improving the detection accuracy?&lt;br /&gt;
: How to exploit the assumption of structured environment to improve tracking?&lt;br /&gt;
: How the background knowledge of agent types (humans, manually driven trucks and auto-guided trucks) and their behaviour models could improve the tracking?&lt;br /&gt;
&lt;br /&gt;
; Preliminary Plan&lt;br /&gt;
* startup: literature review and data acquisition&lt;br /&gt;
* point cloud manipulation, object segmentation, scene understanding.&lt;br /&gt;
* filtering and tracking.&lt;br /&gt;
* [bonus] object recognition&lt;br /&gt;
&lt;br /&gt;
;Deliverable&lt;br /&gt;
: An implementation and demonstration of the developed method for detection and tracking of the moving obstacle over the real data acquired in a real warehouse.&lt;br /&gt;
&lt;br /&gt;
;Bonus&lt;br /&gt;
: conference publication (ETFA, ECMR, TAROS)&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2155</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2155"/>
		<updated>2015-06-30T19:25:04Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
[[File:AIMSoverall.png|thumb|caption|&amp;quot;Overall of the AIMS project&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
[[File:AIMSSemiSuprvised.png|thumb|caption|&amp;quot;Semi-supervised semantic annotation&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;Inventory maping&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Preliminary Results ==&lt;br /&gt;
&lt;br /&gt;
=== Semantic place categorization: ===&lt;br /&gt;
Cell Decomposition for geometrically segmenting and modelling the environment based on the latent structure of the environment. ICARCV14&lt;br /&gt;
and Landmark model-based representation TAROS14 &lt;br /&gt;
Semi-Supervised semantic annotations ECMR15&lt;br /&gt;
&lt;br /&gt;
=== 3D perception for obstacle avoidance ===&lt;br /&gt;
Klas Hedenberg&amp;#039;s work on 3D obstacle avoidance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MSc Projects ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2154</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2154"/>
		<updated>2015-06-30T19:24:05Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
[[File:AIMSoverall.png|thumb|caption|&amp;quot;Overall of the AIMS project&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
[[File:AIMSSemiSuprvised.png|thumb|caption|&amp;quot;Semi-supervised semantic annotation&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Preliminary Results ==&lt;br /&gt;
&lt;br /&gt;
=== Semantic place categorization: ===&lt;br /&gt;
Cell Decomposition for geometrically segmenting and modelling the environment based on the latent structure of the environment. ICARCV14&lt;br /&gt;
and Landmark model-based representation TAROS14 &lt;br /&gt;
Semi-Supervised semantic annotations ECMR15&lt;br /&gt;
&lt;br /&gt;
=== 3D perception for obstacle avoidance ===&lt;br /&gt;
Klas Hedenberg&amp;#039;s work on 3D obstacle avoidance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MSc Projects ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2153</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2153"/>
		<updated>2015-06-30T19:23:11Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
[[File:AIMSoverall.png|thumb|caption|&amp;quot;Overall of the AIMS project&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
[[File:AIMSSemiSuprvised|thumb|caption|&amp;quot;Semi-supervised semantic annotation&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Preliminary Results ==&lt;br /&gt;
&lt;br /&gt;
=== Semantic place categorization: ===&lt;br /&gt;
Cell Decomposition for geometrically segmenting and modelling the environment based on the latent structure of the environment. ICARCV14&lt;br /&gt;
and Landmark model-based representation TAROS14 &lt;br /&gt;
Semi-Supervised semantic annotations ECMR15&lt;br /&gt;
&lt;br /&gt;
=== 3D perception for obstacle avoidance ===&lt;br /&gt;
Klas Hedenberg&amp;#039;s work on 3D obstacle avoidance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MSc Projects ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:AIMSSemiSuprvised.png&amp;diff=2152</id>
		<title>File:AIMSSemiSuprvised.png</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:AIMSSemiSuprvised.png&amp;diff=2152"/>
		<updated>2015-06-30T19:21:40Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2151</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2151"/>
		<updated>2015-06-30T19:16:03Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
[[File:AIMSoverall.png|thumb|caption|&amp;quot;Overall of the AIMS project&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Preliminary Results ==&lt;br /&gt;
&lt;br /&gt;
=== Semantic place categorization: ===&lt;br /&gt;
Cell Decomposition for geometrically segmenting and modelling the environment based on the latent structure of the environment. ICARCV14&lt;br /&gt;
and Landmark model-based representation TAROS14 &lt;br /&gt;
Semi-Supervised semantic annotations ECMR15&lt;br /&gt;
&lt;br /&gt;
=== 3D perception for obstacle avoidance ===&lt;br /&gt;
Klas Hedenberg&amp;#039;s work on 3D obstacle avoidance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MSc Projects ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=File:AIMSoverall.png&amp;diff=2150</id>
		<title>File:AIMSoverall.png</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=File:AIMSoverall.png&amp;diff=2150"/>
		<updated>2015-06-30T19:13:19Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=2149</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=2149"/>
		<updated>2015-06-30T17:03:31Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Cell Phone=+46-762868530&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://se.linkedin.com/pub/saeed-gholami-shahbandi/41/365/4b0/&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantics and Awareness&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
&lt;br /&gt;
I studied Electrical Engineering at University of Mazandaran in Iran. Accomplished my BSc studies in “electronics” and “digital design” by “implementation of a convolutional decoder on FPGA”. Following my education I attended a robotic master program (ASP) in Ecole Central de Nantes in France. Participating in Cart-O-Matic robotic group in University of Angers (ISTIA). I joined CAISR at Halmstad University in 2012, working in the AIMS project under supervision of Prof. Verikas and Associate Prof. Åstrand. My contribution to the project is mainly focused on map analysis and semantic annotation (e.g. structural labels as corridors or local label such as pillars and pallet cells). The objective is to increase the support for awareness of lift-trucks (Automated Guided Vehicles; AGVs) by providing them with an understanding and knowledge on their surrounding environment. My main interests lie in robotics, computer vision and machine learning&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2148</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2148"/>
		<updated>2015-06-30T16:47:34Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
== Introduction ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--&lt;br /&gt;
== Preliminary Results ==&lt;br /&gt;
&lt;br /&gt;
=== Semantic place categorization: ===&lt;br /&gt;
Cell Decomposition for geometrically segmenting and modelling the environment based on the latent structure of the environment. ICARCV14&lt;br /&gt;
and Landmark model-based representation TAROS14 &lt;br /&gt;
Semi-Supervised semantic annotations ECMR15&lt;br /&gt;
&lt;br /&gt;
=== 3D perception for obstacle avoidance ===&lt;br /&gt;
Klas Hedenberg&amp;#039;s work on 3D obstacle avoidance.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== MSc Projects ===&lt;br /&gt;
&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2147</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2147"/>
		<updated>2015-06-30T16:27:16Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
This project is a collaboration between the CAISR, Kollmorgen, Optronic and Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2146</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2146"/>
		<updated>2015-06-30T16:20:39Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=&lt;br /&gt;
&lt;br /&gt;
The state of the art in autonomous robotics has advanced sufficiently that open implementations of many core technologies are now readily available. Consequently, there is growing research on the design and development of innovative solutions that leverage insights from several specialist domains. The AIMS project lies in this category. Its goal is to develop a system that seamlessly combines inventory management with autonomous forklift trucks in intelligent warehouses. Information compatible with human operators, management systems, as well as mobile robots is of particular importance here. A rich and ``life&amp;#039;&amp;#039; map combining metric and semantic information is a crucial ingredient for effective management of logistics and inventory, especially for autonomous fleets working in the same space as humans and human-operated devices.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
The project is a collaboration between the CAISR, Kollmorgen, Optronic, Toyota Material Handling Europe.&lt;br /&gt;
&lt;br /&gt;
== Motivations ==&lt;br /&gt;
An important skill for future robots and automated guided vehicles (AGV:s) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. The ability to structure and sort information provided by sensors increases the system&amp;#039;s flexibility and ability to adapt to new settings. The purpose of AIMS is to make autonomous systems and AGV:s operating in a warehouse setting more intelligent, by extending their functionality with a system for automatic inventory and mapping of goods. Achievement of this purpose requires:&lt;br /&gt;
* situation awareness: through different types of sensors, data fusion and employment of novel methods for interpretations of the information.&lt;br /&gt;
* maintaining practicability by means of flexibility and adaptability for handling variety of environments and sensor&amp;#039;s data.&lt;br /&gt;
&lt;br /&gt;
== Objectives ==&lt;br /&gt;
acquiring the skills of &amp;#039;&amp;#039;situation awareness&amp;#039;&amp;#039;, &amp;#039;&amp;#039;flexibility&amp;#039;&amp;#039; and &amp;#039;&amp;#039;adaptability&amp;#039;&amp;#039;, demands accomplishment in different disciplinary areas:&lt;br /&gt;
* &amp;#039;&amp;#039;Mapping and semantic annotation&amp;#039;&amp;#039;, both as a foundation of the semantic map for addressing articles and trucks in the environment, and to provide an automatic surveying and layout design for initial installation.&lt;br /&gt;
* &amp;#039;&amp;#039;Inventory list maintenance&amp;#039;&amp;#039;; a dynamic map maintenance approach in order to keep track of the inventory, linked with the warehouse management system.&lt;br /&gt;
* &amp;#039;&amp;#039;3D Perception&amp;#039;&amp;#039;; serving the objectives of obstacle avoidance and articles&amp;#039; quantity estimation for inventory list.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- {{ShowProjectPublications}} --&amp;gt;&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=2145</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=2145"/>
		<updated>2015-06-30T15:55:51Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Cell Phone=+46-762868530&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://se.linkedin.com/pub/saeed-gholami-shahbandi/41/365/4b0/&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Semantics and Awareness&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&amp;lt;!--{{PublicationsList}} --&amp;gt;&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2144</id>
		<title>AIMS</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=AIMS&amp;diff=2144"/>
		<updated>2015-06-30T15:52:38Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{ResearchProjInfo&lt;br /&gt;
|Title=AIMS&lt;br /&gt;
|ContactInformation=Björn Åstrand&lt;br /&gt;
|ShortDescription=Automatic Inventory and Mapping of Stock&lt;br /&gt;
|Description=An important skill for future robots and automated guided vehicles (AGVs) is the ability to recognize and describe objects that the robot shall handle and the environment in which the robot operates. This is an important step towards making robots more intelligent (situation awareness). The ability to structure and sort information provided by sensors increases the system’s flexibility and ability to adapt to new settings (which in the end means lower costs). To do this with as few input parameters as possible is also a challenge. The goal with this project is to develop a system for automatic inventory and mapping of goods in a warehouse setting and associate these with articles in the warehouse management system. Objects are augmented into a semantic map and can be both goods and objects belonging to the warehouse infrastructure, e.g. pallet rack and pallets. The semantic map can be further expanded with objects belonging to the building (doors, columns, etc.) and mobile objects that appear in the path of the robot (pedestrians, forklift trucks, etc.). The project thus also includes building a concept map of the area of operation&lt;br /&gt;
&lt;br /&gt;
The project is a collaboration between the CAISR, Kollmorgen, Optronic, Toyota Material Handling Europe.&lt;br /&gt;
|LogotypeFile=Procedure.png&lt;br /&gt;
|ProjectResponsible=Björn Åstrand&lt;br /&gt;
|ProjectDetailsPDF=CAISR Poster 2013 AIMS.pdf&lt;br /&gt;
|FundingMSEK=12&lt;br /&gt;
|ProjectStart=2012/01/01&lt;br /&gt;
|ProjectEnd=2016/01/01&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
|Lctitle=No&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=KOLLMORGEN&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=OPTRONIC&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjPartner&lt;br /&gt;
|projectpartner=Toyota Material Handling Europe&lt;br /&gt;
}}&lt;br /&gt;
[[File:Aims semantic.png|thumb|caption|&amp;quot;AIMS&amp;quot;]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
__NOTOC__ &lt;br /&gt;
{{ShowResearchProject}}&lt;br /&gt;
{{ShowProjectPublications}}&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1866</id>
		<title>Pallet Detection and Mapping</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1866"/>
		<updated>2014-11-21T17:08:45Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Pallet Detection and Mapping in a Warehouse Environment&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=Object Recognition, Classification, Mapping, Scene Understanding&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=Belongie, Serge, Jitendra Malik, and Jan Puzicha. &amp;quot;Shape matching and object recognition using shape contexts.&amp;quot; Pattern Analysis and Machine Intelligence, IEEE Transactions on 24.4 (2002): 509-522.&lt;br /&gt;
&lt;br /&gt;
Viola, Paul, and Michael Jones. &amp;quot;Rapid object detection using a boosted cascade of simple features.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Local feature view clustering for 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Distinctive image features from scale-invariant keypoints.&amp;quot; International journal of computer vision 60.2 (2004): 91-110.&lt;br /&gt;
&lt;br /&gt;
Bay, Herbert, et al. ”Speeded-up robust features (SURF).” Computer vision and image understanding 110.3 (2008): 346-359.&lt;br /&gt;
&lt;br /&gt;
Pinto, Nicolas, David D. Cox, and James J. DiCarlo. &amp;quot;Why is real-world visual object recognition hard?.&amp;quot; PLoS computational biology 4.1 (2008): e27.&lt;br /&gt;
|Prerequisites=Image analysis, programming skills (preferably C++ or Python), familiarity with pattern recognition, Familiarity with filtering techniques (eg. EKF) for mobile robots mapping.&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
concise description:&lt;br /&gt;
This project [as a subset of AIMS project], targets the automation of the logistic management system of a warehouse, by the means of automatic guided vehicles (AGVs, eg. lift trucks). Achieving this objective is feasible if the operating robots (AGVs) understand their surrounding through a high level model of the world. An essential element of this model is the inventory list. Locating stored articles and modulating the inventory list with respect to human’s expectation delivers a more effective system. A common element of the storage in warehouses is the pallet. By detecting and mapping the pallets in a warehouse, the problem of constructing the inventory list will be simplified to the problem of identification of those objects stored over each pallet.&lt;br /&gt;
&lt;br /&gt;
RQ: Detection of pallets falls into the category of object recognition. Although the pallet’s pattern is relatively simple and unique, yet the recognition process will be challenging. To reach a reliable result, one should deal with problems such as segmenting stacked pallets, cluttered environment, bad illumination and different view points.&lt;br /&gt;
&lt;br /&gt;
WP1: startup: literature review and data acquisition&lt;br /&gt;
WP2: pallet detection&lt;br /&gt;
WP3: pallet mapping 2D, [3D: bonus]&lt;br /&gt;
&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed method for detection and mapping pallets, based on a dataset acquired in a real world warehouse.&lt;br /&gt;
[bonus] conference publication (ETFA, ECMR, TAROS).&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1865</id>
		<title>Pallet Detection and Mapping</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1865"/>
		<updated>2014-11-21T17:06:57Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Pallet Detection and Mapping in a Warehouse Environment&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=Object Recognition, Classification, Mapping, Scene Understanding&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=Belongie, Serge, Jitendra Malik, and Jan Puzicha. &amp;quot;Shape matching and object recognition using shape contexts.&amp;quot; Pattern Analysis and Machine Intelligence, IEEE Transactions on 24.4 (2002): 509-522.&lt;br /&gt;
&lt;br /&gt;
Viola, Paul, and Michael Jones. &amp;quot;Rapid object detection using a boosted cascade of simple features.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Local feature view clustering for 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Distinctive image features from scale-invariant keypoints.&amp;quot; International journal of computer vision 60.2 (2004): 91-110.&lt;br /&gt;
&lt;br /&gt;
Bay, Herbert, et al. ”Speeded-up robust features (SURF).” Computer vision and image understanding 110.3 (2008): 346-359.&lt;br /&gt;
&lt;br /&gt;
Pinto, Nicolas, David D. Cox, and James J. DiCarlo. &amp;quot;Why is real-world visual object recognition hard?.&amp;quot; PLoS computational biology 4.1 (2008): e27.&lt;br /&gt;
|Prerequisites=Image analysis, programming skills (preferably C++ or Python), familiarity with pattern recognition, Familiarity with filtering techniques (eg. EKF) for mobile robots mapping.&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
concise description:&lt;br /&gt;
This project [as a subset of AIMS project], targets the automation of the logistic management system of a warehouse, by the means of automatic guided vehicles (AGVs, eg. lift trucks). Achieving this objective is feasible if the operating robots (AGVs) understand their surrounding through a high level model of the world. An essential element of this model is the inventory list. Locating stored articles and modulating the inventory list with respect to human’s expectation delivers a more effective system. A common element of the storage in warehouses is the pallet. By detecting and mapping the pallets in a warehouse, the problem of constructing the inventory list will be simplified to the problem of identification of those objects stored over each pallet.&lt;br /&gt;
&lt;br /&gt;
RQ: Detection of pallets falls into the category of object recognition. Although the pallet’s pattern is relatively simple and unique, yet the recognition process will be challenging. To reach a reliable result, one should deal with problems such as segmenting stacked pallets, cluttered environment, bad illumination and different view points.&lt;br /&gt;
&lt;br /&gt;
WP1: startup: literature review and data acquisition&lt;br /&gt;
WP2: pallet detection&lt;br /&gt;
WP3: pallet mapping 2D, [3D: bonus]&lt;br /&gt;
&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed method for detection and mapping pallets, based on a dataset acquired in a real world warehouse.&lt;br /&gt;
[bonus] conference publication (ETFA, ECMR, TAROS).&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1864</id>
		<title>Pallet Detection and Mapping</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Pallet_Detection_and_Mapping&amp;diff=1864"/>
		<updated>2014-11-21T17:06:11Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Pallet Detection and Mapping in a Warehouse Environment |Programme=Mobile and Autonomous Systems |Keywords=Object Recognition, Classification...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Pallet Detection and Mapping in a Warehouse Environment&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=Object Recognition, Classification, Mapping, Scene Understanding&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=Belongie, Serge, Jitendra Malik, and Jan Puzicha. &amp;quot;Shape matching and object recognition using shape contexts.&amp;quot; Pattern Analysis and Machine Intelligence, IEEE Transactions on 24.4 (2002): 509-522.&lt;br /&gt;
&lt;br /&gt;
Viola, Paul, and Michael Jones. &amp;quot;Rapid object detection using a boosted cascade of simple features.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Local feature view clustering for 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition, 2001. CVPR 2001. Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.&lt;br /&gt;
&lt;br /&gt;
Lowe, David G. &amp;quot;Distinctive image features from scale-invariant keypoints.&amp;quot; International journal of computer vision 60.2 (2004): 91-110.&lt;br /&gt;
&lt;br /&gt;
Bay, Herbert, et al. ”Speeded-up robust features (SURF).” Computer vision and image understanding 110.3 (2008): 346-359.&lt;br /&gt;
&lt;br /&gt;
Pinto, Nicolas, David D. Cox, and James J. DiCarlo. &amp;quot;Why is real-world visual object recognition hard?.&amp;quot; PLoS computational biology 4.1 (2008): e27.&lt;br /&gt;
|Prerequisites=Image analysis, programming skills (preferably C++ or Python), familiarity with pattern recognition, Familiarity with filtering techniques (eg. EKF) for mobile robots mapping.&lt;br /&gt;
|Supervisor=bj&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Dynamic_Objects_Detection_and_Tracking&amp;diff=1863</id>
		<title>Dynamic Objects Detection and Tracking</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Dynamic_Objects_Detection_and_Tracking&amp;diff=1863"/>
		<updated>2014-11-21T17:04:07Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Dynamic Objects Detection and Tracking in Warehouses, Using 3D Sensors. |Programme=Mobile and Autonomous Systems |Keywords=3D Sensor, Point C...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Dynamic Objects Detection and Tracking in Warehouses, Using 3D Sensors.&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=3D Sensor, Point Cloud, Obstacle Detection, Obstacle Tracking, Obstacle Avoidance.&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=Petrovskaya, Anna, and Sebastian Thrun. &amp;quot;Model based vehicle detection and tracking for autonomous urban driving.&amp;quot; Autonomous Robots 26.2-3 (2009): 123-139.&lt;br /&gt;
&lt;br /&gt;
Wojke, N.; Haselich, M., &amp;quot;Moving vehicle detection and tracking in unstructured environments,&amp;quot; Robotics and Automation (ICRA), 2012 IEEE International Conference on , vol., no., pp.3082,3087, 14-18 May 2012.&lt;br /&gt;
&lt;br /&gt;
Moras, J.; Cherfaoui, V.; Bonnifait, P., &amp;quot;A lidar perception scheme for intelligent vehicle navigation,&amp;quot; Control Automation Robotics &amp;amp; Vision (ICARCV), 2010 11th International Conference on , vol., no., pp.1809,1814, 7-10 Dec. 2010&lt;br /&gt;
&lt;br /&gt;
Golovinskiy, Aleksey, Vladimir G. Kim, and Thomas Funkhouser. &amp;quot;Shape-based recognition of 3D point clouds in urban environments.&amp;quot; Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.&lt;br /&gt;
&lt;br /&gt;
Granstrom, K.; Lundquist, C.; Gustafsson, F.; Orguner, U., &amp;quot;Random Set Methods: Estimation of Multiple Extended Objects,&amp;quot; Robotics &amp;amp; Automation Magazine, IEEE , vol.21, no.2, pp.73,82, June 2014&lt;br /&gt;
&lt;br /&gt;
Data Association and Tracking a survey  RoboEarth.&lt;br /&gt;
&lt;br /&gt;
Rusu, Radu Bogdan, and Steve Cousins. &amp;quot;3d is here: Point cloud library (pcl).&amp;quot; Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.&lt;br /&gt;
&lt;br /&gt;
Brostow, Gabriel J., et al. &amp;quot;Segmentation and recognition using structure from motion point clouds.&amp;quot; Computer Vision–ECCV 2008. Springer Berlin Heidelberg, 2008. 44-57.&lt;br /&gt;
&lt;br /&gt;
Drost, Bertram, et al. &amp;quot;Model globally, match locally: Efficient and robust 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.&lt;br /&gt;
&lt;br /&gt;
Biasotti, S. ; Falcidieno, B. ; Giorgi, D. ; Spagnuolo, M. “Mathematical Tools for Shape Analysis and Description”, 2014, Publisher :Morgan &amp;amp; Claypool, Edition:1, ISBN:1627053646&lt;br /&gt;
&lt;br /&gt;
Börcs, Attila, et al. &amp;quot;A Model-based Approach for Fast Vehicle Detection in Continuously Streamed Urban LIDAR Point Clouds.&amp;quot; (2014).&lt;br /&gt;
|Prerequisites=Familiarity with filtering techniques (eg. EKF) for mobile robots localization, Image analysis, programming skills (preferably C++ or Python).&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
concise description: This project [as a subset of AIMS project], targets the automation of lift trucks in warehouse environments. Operating automatic guided vehicles in this particular environment is challenging due to the high expected throughput  and consequently high traffic. Lift trucks are heavy vehicles operating with relatively high speed in an environment where neither the trucks, nor the humans are well protected as a regular urban traffic. This calls for a extra security measure and cautious decisions. Collision avoidance is an essential skill for mobile robots to guarantee a safe operation in a workspace shared with humans. This project focuses on detection and tracking of dynamic objects in order to avoid collision.&lt;br /&gt;
&lt;br /&gt;
RQ: How to reliably detect, segment, and track dynamic objects (eg. humans and lift trucks) from a 3D point cloud, acquired by the means of a 3D sensor mounted on a mobile robot, in  highly structured environment (warehouse).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
WP1: startup: literature review and data acquisition&lt;br /&gt;
WP2: point cloud manipulation, object segmentation, scene understanding.&lt;br /&gt;
WP3: filtering and tracking.&lt;br /&gt;
WP4: [bonus] object recognition&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed method for detection and tracking of the moving obstacle over the real data acquired in a real warehouse.&lt;br /&gt;
[bonus] conference publication (ETFA, ECMR, TAROS)&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Barcode_mapping_in_warehouses&amp;diff=1862</id>
		<title>Barcode mapping in warehouses</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Barcode_mapping_in_warehouses&amp;diff=1862"/>
		<updated>2014-11-21T17:00:19Z</updated>

		<summary type="html">&lt;p&gt;Saesha: Created page with &amp;quot;{{StudentProjectTemplate |Summary=Using barcode detection and decoding for mapping the infrastructure and inventory of warehouses |Programme=Mobile and Autonomous Systems |Key...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Using barcode detection and decoding for mapping the infrastructure and inventory of warehouses&lt;br /&gt;
|Programme=Mobile and Autonomous Systems&lt;br /&gt;
|Keywords=Barcode, Inventory, Mapping&lt;br /&gt;
|TimeFrame=January 2015 until June 2015,  with possible extension until September 2015.&lt;br /&gt;
|References=AIMS-project, http://islab.hh.se/mediawiki/AIMS&lt;br /&gt;
&lt;br /&gt;
ROS - Robot Operating System,  http://www.ros.org/&lt;br /&gt;
&lt;br /&gt;
ZBar bar code reader,  http://zbar.sourceforge.net/&lt;br /&gt;
&lt;br /&gt;
Stampfer, D.; Lutz, M.; Schlegel, C., &amp;quot;Information driven sensor placement for robust active object recognition based on multiple views,&amp;quot; Technologies for Practical Robot Applications (TePRA), 2012 IEEE International Conference on , vol., no., pp.133,138, 23-24 April 2012, doi: 10.1109/TePRA.2012.6215667&lt;br /&gt;
&lt;br /&gt;
Karpischek, S., Michahelles, F., Fleisch, E., “my2cents: enabling research on consumer-product interaction”, Pers Ubiquit Comput (2012) 16:613–622, DOI 10.1007/s00779-011-0426-9&lt;br /&gt;
&lt;br /&gt;
Han, Y., Sumi, Y., Matsumoto, Y., and And, N, “.Acquisition of Object Pose from Barcode for Robot Manipulation”, I. Noda et al. (Eds.): SIMPAR 2012, LNAI 7628, pp. 299–310, 2012.&lt;br /&gt;
&lt;br /&gt;
G Meng, S Darman, “Label and Barcode Detection in Wide Angle Image”, Master Thesis, Halmstad University, Sweden, http://urn.kb.se/resolve?urn=urn:nbn:se:hh:diva-23979&lt;br /&gt;
&lt;br /&gt;
|Prerequisites=Image analysis, programming skills (preferably C++ or Python).&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi, &lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Open&lt;br /&gt;
}}&lt;br /&gt;
This project [as a subset of AIMS project], targets the automation of forklift trucks in warehouse environments. The aim is to design a system for mapping of barcodes in a warehouse setting. Barcodes of interests are those located on pallet racks (beams) for identification of pallet rack cells and those located on individual boxes of different articles located on pallets. The goal is to generate a map (metric) of positions of barcodes found in the warehouse. The study also includes a comparison between a commercial barcode reader (from Cognex) and a custom built system based on a Gigabit camera (Prosilica) and a open source software for barcode detection (Zbar). Preferable the solutions are designed as ROS-packages.&lt;br /&gt;
&lt;br /&gt;
Resources: Facilities for data logging, cameras, barcode readers, laboratory equipped with a forklift truck for experiments, data logging equipment.&lt;br /&gt;
&lt;br /&gt;
RQ: Which system is best suited for bar code mapping in a warehouse? How can these systems be improved for faster and more accurate map building?&lt;br /&gt;
&lt;br /&gt;
WP1: Literature review and data acquisition.&lt;br /&gt;
WP2: Develop methods for barcode mapping using the different systems.&lt;br /&gt;
WP3: Comparison study and development of improvements of the different systems. &lt;br /&gt;
WP4: [bonus] conference publication (ETFA, ECMR, TAROS)&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed system for bar-code mapping using data acquired in a real warehouse.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Simulating_Crowds_for_Traffic_Safety_Research&amp;diff=1797</id>
		<title>Simulating Crowds for Traffic Safety Research</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Simulating_Crowds_for_Traffic_Safety_Research&amp;diff=1797"/>
		<updated>2014-10-14T16:42:28Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Integrate crowd simulation into a mixed-reality platform for development and testing of advanced automotive safety systems.&lt;br /&gt;
|Keywords=Virtual world, swarm simulation&lt;br /&gt;
|References=http://gamma.cs.unc.edu/research/crowds/&lt;br /&gt;
http://www.coppeliarobotics.com/&lt;br /&gt;
|Prerequisites=Solid programming in Python, C, or C++; previous experience with ROS and mobile robotics would be a significant advantage&lt;br /&gt;
|Supervisor=Roland Philippsen&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
|Title=Simulating Crowds for Traffic Safety Research&lt;br /&gt;
}}&lt;br /&gt;
=== Project Description ===&lt;br /&gt;
&lt;br /&gt;
Many (if not most) safety-relevant elements of city driving involve vulnerable road users: pedestrians, bicycles, or kids playing in the street, to name just a few. Advanced safety systems for cars and trucks should take these into account, and there are high-profile research projects that specifically investigate how best to protect these vulnerable road users from accidents. One possibility is to incorporate people detection algorithms into cars, based on onboard sensors, and provide emergency avoidance maneuvers in case of imminent collisions.&lt;br /&gt;
&lt;br /&gt;
Such active safety systems would need extensive testing before being admitted in series production cars and trucks. But testing them is quite challenging: because of the severity of system failures, it is not possible to test them on real pedestrians. This raises the question of how to emulate or simulate pedestrians, for instance with puppets or in mixed reality settings. A large body of prior work exists in simulating crowds of humans for applications in computer games or the analysis of human movement through confined spaces such as subway stations. Some examples can be found on the website of the [http://gamma.cs.unc.edu/research/crowds/ GAMMA research group] at the University of North Carolina at Chapel Hill.&lt;br /&gt;
&lt;br /&gt;
In this project, the student will survey the state of the art in crowd simulation, with a focus on approaches and methods that can be used in mixed-reality settings to develop, evaluate, and test active safety systems in intelligent vehicles. Then, two or three promising algorithms will be implemented within our existing augmented reality robot platform, which then allows to simulate the sensor data of virtual vehicles that are confronted with these crowds of humans. The most appropriate crowd simulator will then be used to implement a handful of very specific test scenarios for active safety systems, to be further defined in collaboration with our industrial partner.&lt;br /&gt;
&lt;br /&gt;
==== Further References ====&lt;br /&gt;
&lt;br /&gt;
*[http://www.coppeliarobotics.com/ V-Rep] robot simulator&lt;br /&gt;
*Home page of [http://www.cs.utah.edu/%7Eberg/ Jur van den Berg]&lt;br /&gt;
*[http://en.wikipedia.org/wiki/Velocity_obstacle Velocity obstacle] concept&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Multi-robot_trajectory_adaptation_using_a_state-time_elastic_band_approach&amp;diff=1796</id>
		<title>Multi-robot trajectory adaptation using a state-time elastic band approach</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Multi-robot_trajectory_adaptation_using_a_state-time_elastic_band_approach&amp;diff=1796"/>
		<updated>2014-10-14T16:42:13Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=For teams of robots moving through dynamic environments, deform planned trajectories in space and time such that they remain collision-free and reach the goal (if possible) or report failure if replanning is needed.&lt;br /&gt;
|Programme=tbd&lt;br /&gt;
|Keywords=multi-robot planning, multi-robot coordination, obstacle avoidance, velocity-space obstacle&lt;br /&gt;
|TimeFrame=tbd&lt;br /&gt;
|References=tbd&lt;br /&gt;
|Prerequisites=recommended: intelligent vehicles, design of embedded intelligent systems, or similar; strong in math and programming&lt;br /&gt;
|Supervisor=Roland Philippsen, Jennifer David&lt;br /&gt;
|Examiner=Antanas Verikas&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
Revisiting some research topics that appear to have calmed down with fresh ideas...&lt;br /&gt;
&lt;br /&gt;
http://robotics.unizar.es/data/documentos/dynamic-IROS05.pdf&lt;br /&gt;
http://ais.informatik.uni-freiburg.de/publications/papers/bennewitz01icra.pdf&lt;br /&gt;
http://homepages.laas.fr/nic/Papers/02itra.pdf&lt;br /&gt;
http://hal.archives-ouvertes.fr/docs/00/25/93/21/PDF/99-fraichard-rsjar.pdf&lt;br /&gt;
&lt;br /&gt;
...many others, no time to flesh it out right now.  Write me an email or drop by to see me and chat!&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Merging_Clothoids_with_B-Splines&amp;diff=1795</id>
		<title>Merging Clothoids with B-Splines</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Merging_Clothoids_with_B-Splines&amp;diff=1795"/>
		<updated>2014-10-14T16:41:53Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Develop an approach to create natural clothoidal lane-change maneuvers for automobiles on lanes that are specified using B-splines.&lt;br /&gt;
|Keywords=Parametric curves, computational geometry, vehicle planning and control&lt;br /&gt;
|References=http://en.wikipedia.org/wiki/B-spline&lt;br /&gt;
http://en.wikipedia.org/wiki/Clothoid&lt;br /&gt;
|Prerequisites=Solid programming skills in Python, Matlab, or similar scientific computing language; foundations in analytical geometry&lt;br /&gt;
|Supervisor=Roland Philippsen,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
|Title=Merging Clothoidal Vehicle Trajectories onto B-Spline Paths&lt;br /&gt;
}}&lt;br /&gt;
=== Project Description ===&lt;br /&gt;
&lt;br /&gt;
[http://en.wikipedia.org/wiki/B-spline B-splines] are a very popular and expressive way to create smooth curves in computer graphics, industrial design, and other domains. They are also used in some vehicle control algorithms to specify a path to follow. However, splines are not the most natural way that cars and similar vehicles are actually driven around by humans. [http://en.wikipedia.org/wiki/Clothoid Clothoids] provide a much more appropriate formalism for this, because the curvature varies monotonically (linearly in fact). This is why highways are designed with clothoids (in combination with straight lines and circular arcs). A lot of work in path planning and control for vehicles thus relies on clothoids, creating a bit of a dichotomy when we want to leverage the geometrical power of splines with the kinodynamic smoothness of clothoids.&lt;br /&gt;
&lt;br /&gt;
One of the technical challenges that upcoming advanced safety systems for cars will need to face is the opportunity of using lane-change maneuvers to avoid dangerous situations. Similarly, in order to fulfill the dream of autonomous cars, such maneuvers will need to become commonplace and fully accepted by the human passengers. In this project, the candidate will survey previous work in path planning and control which uses either splines or clothoids, and also at planning methods which work in the space of control trajectories in order to produce work-space trajectories which natural smoothness properties. The candidate will then work on an approach to create a natural lane-change maneuver from one lane of traffic to another. The lanes will be specified using splines, and the vehicle motion will be specified using chunk-wise monotonically varying curvature, thus the relevance of clothoids.&lt;br /&gt;
&lt;br /&gt;
This work has a large theoretical proportion, from understanding prior work to formulating mathematical foundations to bridge the gap between two somewhat contradictory maneuver generation approaches. Programming will be limited to rapid prototyping in a high-level language, such as Matlab or Python with numpy/matplotlib.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Evaluation_of_Open_Source_Robot_Simulators_for_Smart_Mobility_Applications&amp;diff=1794</id>
		<title>Evaluation of Open Source Robot Simulators for Smart Mobility Applications</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Evaluation_of_Open_Source_Robot_Simulators_for_Smart_Mobility_Applications&amp;diff=1794"/>
		<updated>2014-10-14T16:41:43Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Can open source robot simulators serve as starting point for cloud services that support automotive R&amp;amp;D and V&amp;amp;V?&lt;br /&gt;
|References=http://www.gcdc.net/&lt;br /&gt;
http://en.wikipedia.org/wiki/Research_and_development&lt;br /&gt;
http://en.wikipedia.org/wiki/Verification_and_validation&lt;br /&gt;
http://www.robocup.org/&lt;br /&gt;
http://www.theroboticschallenge.org/aboutsimulator.aspx&lt;br /&gt;
http://www.coppeliarobotics.com/&lt;br /&gt;
http://gazebosim.org/&lt;br /&gt;
|Prerequisites=Solid programming experience, preferably in C++ under Linux.&lt;br /&gt;
|Supervisor=Roland Philippsen, Saeed Gholami Shahbandi, Christian Berger (Chalmers)&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
The upcoming &amp;quot;iGAME&amp;quot; competition is a continuation of the “Grand Cooperative Driving Challenge” and will attract international teams to work on cooperative manoeuvres, such as merging onto highways, which can profit from vehicle communication and distributed decision making.  One of the challenges here is the coordinated and safe interplay of complex and heterogeneous systems from the various teams.  Significant challenges when deploying a communication protocol for such scenarios include the need to identify all relevant real-world factors and capture them in the simulation and verifying that the protocol holds for the targeted scenarios.  From a less technical point of view, a remaining open question is whether a common simulation platform, or even a custom cloud-based simulation service, could be used to lower the entrance barriers for participating in iGAME and making standardised tests inherent part of the vetting process before the real-world events.  In combination with miniature vehicles (e.g. 1/10 scale) such a simulation-based &amp;quot;entry league&amp;quot; (to take inspiration from Robocup and the DARPA Robotics Challenge) could significantly boost the participation in and impact of iGAME.&lt;br /&gt;
&lt;br /&gt;
The goal of this Master project is to survey approaches that leverage simulations in similar contexts and evaluate one or two open source simulators in view of using them for smart mobility application development.  In particular, the question of whether they can be deployed as a cloud service shall be investigated.  As a concrete project result, a prototypical implementation of such a service shall be targeted.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Acumen_Robot_Model_Series&amp;diff=1793</id>
		<title>Acumen Robot Model Series</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Acumen_Robot_Model_Series&amp;diff=1793"/>
		<updated>2014-10-14T16:41:29Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Build a series of increasingly sophisticated robot models in Acumen, to (1) explore mathematical formulations and (2) create tutorials and didactic examples.&lt;br /&gt;
|Keywords=Rigid-body dynamics, Acumen, Cyber-Physical System&lt;br /&gt;
|References=http://www.acumen-language.org/&lt;br /&gt;
http://en.wikipedia.org/wiki/SCARA SCARA&lt;br /&gt;
|Prerequisites=Solid mathematical and programming skills&lt;br /&gt;
|Supervisor=Roland Philippsen, Walid Taha&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
|Title=Acumen Robot Model Series&lt;br /&gt;
}}&lt;br /&gt;
=== Project Description ===&lt;br /&gt;
&lt;br /&gt;
Robots are machines that embody sensors, actuators, and computational resources. They are thus an excellent example of [http://en.wikipedia.org/wiki/Cyber-physical_system cyber-physical system] (CPS) and present very interesting twists for modeling and simulation tools. In particular, the equations of motion for robots that can be modeled as rigid-body trees are of a form that has been studied extensively &amp;amp;#091;[[#featherstone-2008|1]]&amp;amp;#093;.&lt;br /&gt;
&lt;br /&gt;
[http://www.acumen-language.org/ Acumen] is a domain specific language for modeling CPS. It is being developed by the [http://www.effective-modeling.org/ Effective Modeling group] to address a  key challenge  for accelerating innovation in this area. Simulation plays a key role  in CPS design, and Acumen  is a language for capturing and simulating the kind of hybrid continuous/discrete  models needed to  capture the behavior of cyber-physical systems. &lt;br /&gt;
&lt;br /&gt;
The objective of this masters&amp;#039; thesis is to build up a series of increasingly sophisticated robot manipulator examples in Acumen. The aim is twofold: (1) explore which kinds of mathematical formulations need to be efficiently supported by Acumen in order to best support the robot design and prototyping process, and (2) create tutorials and didactic examples for teaching Acumen in particular, and CPS in general.&lt;br /&gt;
&lt;br /&gt;
A preliminary idea for the sequence of robot examples is as follows.&lt;br /&gt;
#Point masses in a plane &amp;#039;&amp;#039;(serves double duty as intro to Acumen)&amp;#039;&amp;#039;&lt;br /&gt;
#*kinematic chains (pendulum, double pendulum, ...)&lt;br /&gt;
#*kinematic trees (stick figures)&lt;br /&gt;
#Inertias in a plane&lt;br /&gt;
#*kinematic chains&lt;br /&gt;
#*kinematic trees&lt;br /&gt;
#*real-world example: [http://en.wikipedia.org/wiki/SCARA SCARA] robot&lt;br /&gt;
#Inertias in three dimensions&lt;br /&gt;
#*study [http://en.wikipedia.org/wiki/Denavit%E2%80%93Hartenberg_parameters Denavit-Hartenberg] (DH) parameters as a preparation for the following steps&lt;br /&gt;
#*repeat the first four examples in 3D space with DH parameters&lt;br /&gt;
#*model a [http://en.wikipedia.org/wiki/Programmable_Universal_Machine_for_Assembly PUMA] arm, a very nice example of kinematic chain, with known kinematic and dynamic parameters&lt;br /&gt;
#*model a [http://www.willowgarage.com/pages/pr2/overview PR2] mobile manipulator, a very nice example of kinematic tree, also with known parameters&lt;br /&gt;
&lt;br /&gt;
==== References ====&lt;br /&gt;
&amp;lt;div id=&amp;quot;featherstone-2008&amp;quot;&amp;gt;&amp;lt;/div&amp;gt;[1] R. Featherstone. Rigid Body Dynamics Algorithms. Springer, New York, 2008. ISBN 0387743146.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Obstacle_Identification_from_3D_Data_for_AGVs_in_a_Warehouse_Environment&amp;diff=1789</id>
		<title>Obstacle Identification from 3D Data for AGVs in a Warehouse Environment</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Obstacle_Identification_from_3D_Data_for_AGVs_in_a_Warehouse_Environment&amp;diff=1789"/>
		<updated>2014-10-14T10:29:57Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Obstacle Identification from 3D Data for AGVs in a Warehouse Environment&lt;br /&gt;
|Programme=Embedded and Intelligent Systems (120 credits), Information Technology (120 credits)&lt;br /&gt;
|Keywords=3D point cloud, time of flight camera, obstacle detection, segmentation, object recognition, mobile robot&lt;br /&gt;
|TimeFrame=Start: February 2014, End: June 2014&lt;br /&gt;
|References=Zhang, Hao, et al. &amp;quot;SVM-KNN: Discriminative nearest neighbor classification for visual category recognition.&amp;quot; Computer Vision and Pattern Recognition, 2006 IEEE Computer Society Conference on. Vol. 2. IEEE, 2006.&lt;br /&gt;
&lt;br /&gt;
Golovinskiy, Aleksey, Vladimir G. Kim, and Thomas Funkhouser. &amp;quot;Shape-based recognition of 3D point clouds in urban environments.&amp;quot; Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009.&lt;br /&gt;
&lt;br /&gt;
Rusu, Radu Bogdan, and Steve Cousins. &amp;quot;3d is here: Point cloud library (pcl).&amp;quot; Robotics and Automation (ICRA), 2011 IEEE International Conference on. IEEE, 2011.&lt;br /&gt;
&lt;br /&gt;
Nüchter, Andreas, and Joachim Hertzberg. &amp;quot;Towards semantic maps for mobile robots.&amp;quot; Robotics and Autonomous Systems 56.11 (2008): 915-926.&lt;br /&gt;
&lt;br /&gt;
Lai, Kevin, and Dieter Fox. &amp;quot;Object recognition in 3D point clouds using web data and domain adaptation.&amp;quot; The International Journal of Robotics Research 29.8 (2010): 1019-1037.&lt;br /&gt;
&lt;br /&gt;
Brostow, Gabriel J., et al. &amp;quot;Segmentation and recognition using structure from motion point clouds.&amp;quot; Computer Vision–ECCV 2008. Springer Berlin Heidelberg, 2008. 44-57.&lt;br /&gt;
&lt;br /&gt;
Rusu, Radu Bogdan, et al. &amp;quot;Fast 3d recognition and pose using the viewpoint feature histogram.&amp;quot; Intelligent Robots and Systems (IROS), 2010 IEEE/RSJ International Conference on. IEEE, 2010.&lt;br /&gt;
&lt;br /&gt;
Drost, Bertram, et al. &amp;quot;Model globally, match locally: Efficient and robust 3D object recognition.&amp;quot; Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.&lt;br /&gt;
|Prerequisites=Image analysis, machine learning, programming skill, ROS and PLC&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
A very essential element to achieve the proper solution of the intelligent warehouses, is AGVs with smart behavior. One criteria of a smart behavior is the way vehicles handle obstacle encountering. The goal of this project is to use a 3D sensor (Fotonic P70, a time of flight camera) to detect and identify the obstacles appearing in the path of AGV (lift-trucks) in warehouses.&lt;br /&gt;
Research Question: while the current solution to obstacle avoidance for lift-trucks in the work environment involves a set of 2D range sensors and obstacle detection, desired result of this project is to develop  a method for obstacle identification by mean of a 3D sensors, in order to increase “situation awareness” of AGVs and behave more intelligently.&lt;br /&gt;
&lt;br /&gt;
Work package 1: 3D point cloud manipulation (system setup)&lt;br /&gt;
Work package 2: object detection (segmentation)&lt;br /&gt;
Work package 3: identity recognition of obstacles (classification)&lt;br /&gt;
Work package 4: estimating the motion of obstacles from a sequence of frames (bonus part)&lt;br /&gt;
&lt;br /&gt;
Deliverable: an implementation and demonstration of a developed method for obstacle identification.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Semantic_Analysis_of_2D_Maps_With_a_Metric-Topological_Approach&amp;diff=1788</id>
		<title>Semantic Analysis of 2D Maps With a Metric-Topological Approach</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Semantic_Analysis_of_2D_Maps_With_a_Metric-Topological_Approach&amp;diff=1788"/>
		<updated>2014-10-14T10:29:44Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{StudentProjectTemplate&lt;br /&gt;
|Summary=Semantic Analysis of 2D Maps With a Metric-Topological Approach.&lt;br /&gt;
|Programme=Embedded and Intelligent Systems (120 credits),  Information Technology (120 credits)&lt;br /&gt;
|Keywords=semantic labeling, place labeling, segmentation, pattern recognition, classification, 2D maps&lt;br /&gt;
|TimeFrame=Start: February 2014, End: June 2014&lt;br /&gt;
|References=Rottmann, Axel, et al. &amp;quot;Semantic place classification of indoor environments with mobile robots using boosting.&amp;quot; AAAI. Vol. 5. 2005.&lt;br /&gt;
&lt;br /&gt;
Liu, Ziyuan, and Georg von Wichert. &amp;quot;Extracting semantic indoor maps from occupancy grids.&amp;quot; Robotics and Autonomous Systems (2013).&lt;br /&gt;
&lt;br /&gt;
Schroter, Derik, Michael Beetz, and J-S. Gutmann. &amp;quot;Rg mapping: Learning compact and structured 2d line maps of indoor environments.&amp;quot; Robot and Human Interactive Communication, 2002. Proceedings. 11th IEEE International Workshop on. IEEE, 2002.&lt;br /&gt;
|Prerequisites=Image analysis, programming skill, machine learning&lt;br /&gt;
|Supervisor=Björn Åstrand, Saeed Gholami Shahbandi,&lt;br /&gt;
|Level=Master&lt;br /&gt;
|Status=Internal Draft&lt;br /&gt;
}}&lt;br /&gt;
A fundamental ingredient for semantic labeling is a reliable method for determining relevant spatial features of an environment and a proper way of describing the context through pattern classification. Adaptive Grid detects arbitrary dominant orientations in, fits corresponding line features with tunable resolution, and extracts topological information. This method is developed for the case of a maps with straight lines and occupancy informations. The goal of this project is to generalize the method for arbitrary feature fitting and develop a pattern classification for a more general map (like an aerial image of an structured area) other than just occupancy map.&lt;br /&gt;
Research Question: How to employ pattern classification as region descriptor and developing a general scheme for semantic labeling.&lt;br /&gt;
Work package P1: generalization of feature fitting, both in nature and shape. Features to be fitted and representing the structure of the environment do not have to be necessarily walls and straight light.&lt;br /&gt;
Work package 2: analysis of the cells represented by Adaptive Grid (pattern classification)&lt;br /&gt;
Work package 3: semantic label development&lt;br /&gt;
Deliverable: an implementation and demonstration of the developed method for a general semantic analysis based on Adaptive Grid. Result of the method shall be demonstrated on other types of maps/images other than occupancy grid maps.&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=1771</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=1771"/>
		<updated>2014-08-29T07:12:53Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|url=http://se.linkedin.com/pub/saeed-gholami-shahbandi/41/365/4b0/&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&amp;lt;!--{{PublicationsList}} --&amp;gt;&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
	<entry>
		<id>https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=1770</id>
		<title>Saeed Gholami Shahbandi</title>
		<link rel="alternate" type="text/html" href="https://mw.hh.se/caisr/index.php?title=Saeed_Gholami_Shahbandi&amp;diff=1770"/>
		<updated>2014-08-29T07:10:09Z</updated>

		<summary type="html">&lt;p&gt;Saesha: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Person&lt;br /&gt;
|Family Name=Gholami Shahbandi&lt;br /&gt;
|Given Name=Saeed&lt;br /&gt;
|Title=M.Sc&lt;br /&gt;
|Phone=+46-35-16-7537&lt;br /&gt;
|Position=PhD Candidate&lt;br /&gt;
|Email=saesha@hh.se&lt;br /&gt;
|Image=Saaed_small.jpg‎&lt;br /&gt;
|Office=E522&lt;br /&gt;
|Subject=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignProjects&lt;br /&gt;
|project=AIMS&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Robotics&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Computer Vision&lt;br /&gt;
}}&lt;br /&gt;
{{AssignSubjectAreas&lt;br /&gt;
|SubjectArea=Machine Learning&lt;br /&gt;
}}&lt;br /&gt;
{{AssignApplicationAreas&lt;br /&gt;
|ApplicationArea=Intelligent Vehicles&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--Remove or add comments --&amp;gt;&lt;br /&gt;
{{ShowPerson}}&lt;br /&gt;
{{InsertSubjAreas}}&lt;br /&gt;
{{InsertProjects}}&lt;br /&gt;
{{PublicationsList}}&lt;br /&gt;
&amp;lt;!--{{PublicationsList}} --&amp;gt;&lt;br /&gt;
[[Category:Staff]]&lt;/div&gt;</summary>
		<author><name>Saesha</name></author>
	</entry>
</feed>