A team project that focuses on an enterprise-level automation robot aimed at solving a reporting problem. The bot may be demonstrated in a proof-of-concept state: it can be presented on a single desktop, need not rely on live data inputs, need not be fully complete at the enterprise level, and may produce rudimentary dashboards or other outputs. The solution can be leveraged in the Common Final Project, where a technology-assisted ESG disclosure task is discussed. Teams should consider how the solution qualifies as "Intelligent Automation" by applying Generative Artificial Intelligence and/or Agentic AI (teams of AI agents). The presentation of the automation solution should focus on the technical goals (including clearly articulating the process with a process diagram) and the limitations of the bot. Teams are encouraged to discuss their solution with their instructor as early as possible for feedback on feasibility and expectations. Details and further guidance will be provided on Canvas.
| Deliverable | Due Date | Canvas Submission Portal |
|---|---|---|
| Intelligent Automation Team Challenge (Team, 35%, presented in Classes 17 & 18, materials due May 26th) | May 26th, 2025 | Upload to Canvas (one submission per team) |
Further details are provided below for each required deliverable.
Required deliverable: A software submission demonstrating an enterprise-oriented automation that applies at least one form of intelligent automation, such as Generative AI or Agentic AI principles (a minimal illustrative sketch follows the deliverable descriptions below).
Required deliverable: A short presentation explaining the problem, your automation architecture, the intelligent features included, and limitations or next steps.
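To make the "intelligent automation" expectation concrete, one possible proof-of-concept shape is sketched below. This is only an illustration, not a required design: it assumes the pandas library and the OpenAI Python SDK (openai >= 1.0) with an API key in the OPENAI_API_KEY environment variable, and the model name, column handling, and file paths are placeholders. Teams may use any comparable tooling or architecture.

```python
"""Minimal sketch of a Generative AI reporting bot (illustrative only).

Assumptions not taken from the course materials: pandas and the OpenAI
Python SDK are installed, OPENAI_API_KEY is set, and a static file
sample_report_data.csv stands in for live data inputs.
"""

import pandas as pd
from openai import OpenAI


def run_reporting_bot(csv_path: str = "sample_report_data.csv") -> None:
    # Step 1: ingest static sample data (no live inputs in the proof of concept).
    df = pd.read_csv(csv_path)

    # Step 2: compute a rudimentary numeric summary to ground the narrative.
    summary_stats = df.describe(include="all").to_string()

    # Step 3: apply Generative AI to draft commentary on the data.
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You draft concise reporting commentary."},
            {"role": "user", "content": f"Summarize the key trends in this data:\n{summary_stats}"},
        ],
    )
    commentary = response.choices[0].message.content

    # Step 4: emit a rudimentary dashboard as a simple HTML page.
    html = f"<h1>Automated Report</h1><pre>{summary_stats}</pre><p>{commentary}</p>"
    with open("dashboard.html", "w", encoding="utf-8") as fh:
        fh.write(html)


if __name__ == "__main__":
    run_reporting_bot()
```

An Agentic AI variant of the same idea would split these steps across cooperating agents (for example, a data-ingestion agent, an analysis agent, and a report-writing agent) coordinated by an orchestration framework of the team's choosing.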
This policy outlines expectations for the responsible and ethical use of generative AI technologies, including large language models (LLMs) such as ChatGPT, in this course. These tools can significantly enhance learning, productivity, and creativity, but they must be used transparently and professionally to support a respectful and effective learning environment.
Generative AI may be used to assist with idea generation, research, document drafting, programming, editing, and other academic work, provided the output is critically reviewed, refined, and understood by the student or team. Use of AI is encouraged when it enhances the learning process.
Students are responsible for the accuracy, relevance, and integrity of any work submitted, including content influenced or generated by AI tools. Errors introduced by generative AI (factual, analytical, or interpretive) will be treated as student errors and may result in reduced grades.
Students may be asked to disclose when and how they used generative AI tools in individual or team assignments. In cases where the use of AI significantly contributes to the submission (e.g., coding assistance, text drafting), students should include a brief statement describing the use.
Submitting AI-generated content without understanding it, using AI to bypass individual learning (e.g., for comprehension-based quizzes or in-class polls), or allowing AI to make up sources or misrepresent work is a violation of course expectations and academic integrity.
This policy may be updated as the role of AI in education continues to evolve.