ACCTG 528 Assessment

Intelligent Automation Team Challenge

Team Deliverable (35%)

Assessment Overview

A team project focused on building an enterprise-level automation robot that solves a reporting problem. The bot may be demonstrated in a proof-of-concept state: presented on a single desktop, not relying on live data inputs, not fully completed at the enterprise level, and producing rudimentary dashboards or other outputs. The solution can be leveraged in the Common Final Project, where a technology-assisted ESG disclosure task is discussed. Teams should consider how the solution qualifies as "Intelligent Automation" by applying Generative Artificial Intelligence and/or Agentic AI (teams of AI agents). The presentation of the automation solution should focus on the technical goals (including clearly articulating the process with a process diagram) and the limitations of the bot. Teams are encouraged to discuss their solution with their instructor as early as possible for feedback on feasibility and expectations. Details and further guidance will be provided on Canvas.

Required Deliverables

Deliverable: Intelligent Automation Team Challenge (Team, 35%; presented in Classes 17 & 18; materials due May 26th)
Due Date: May 26th, 2025
Canvas Submission Portal: Upload to Canvas (one submission per team)

Deliverable Details and Hints

Further details are provided below for each required deliverable.

Required deliverable: A software submission demonstrating an enterprise-oriented automation that applies at least one form of intelligent automation, including either Generative AI or Agentic AI principles.

  • The automation should simulate or address a real enterprise-level reporting problem. This can include financial reporting, compliance monitoring, ESG-related disclosures, or operational dashboards.
  • The automation must be submitted as a working prototype, capable of running on a single desktop. It is not expected to be connected to live enterprise systems but should demonstrate the technical logic and structure of the end solution.
  • Teams must document which elements involve intelligent automation, especially where GenAI or agentic workflows (e.g., AI planning or multi-agent cooperation) are used.

  • Think about enterprise use cases such as reconciliations, report generation, risk alerts, or document processing that could be enhanced using GenAI or agentic logic.
  • Agentic automation might involve a planning agent, a retrieval agent, and a reporting agent working in coordination—don’t worry about fully deploying this, just simulate the idea.
  • If your automation uses an LLM, be clear on how it is used (e.g., summarizing, classifying, formatting). Include prompt examples or API logic in your documentation.
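The agent coordination described above can be simulated without any live systems or API keys. The sketch below is illustrative only: the agent class names are hypothetical, and the "LLM" is a stub function standing in for a real API call (teams would substitute their own provider and prompts). It shows the planner/retrieval/reporting handoff on simulated reconciliation data.

```python
# Minimal sketch of a planner -> retrieval -> reporting agent handoff.
# All names are illustrative; the "LLM" is a stub so the example runs
# on a single desktop with no external service, matching the
# proof-of-concept expectations above.

def stub_llm(prompt: str) -> str:
    """Stand-in for a real LLM API call; a team would replace this."""
    if "summarize" in prompt.lower():
        return "Summary: 2 of 3 invoices reconciled; 1 exception flagged."
    return "OK"

class PlanningAgent:
    def plan(self, goal: str) -> list[str]:
        # A real planner might ask an LLM to decompose the goal.
        return ["retrieve invoice data", "reconcile", "summarize results"]

class RetrievalAgent:
    def retrieve(self) -> list[dict]:
        # Simulated data in place of a live enterprise system.
        return [
            {"invoice": "INV-001", "ledger": 500.0, "bank": 500.0},
            {"invoice": "INV-002", "ledger": 250.0, "bank": 250.0},
            {"invoice": "INV-003", "ledger": 75.0, "bank": 70.0},
        ]

class ReportingAgent:
    def report(self, rows: list[dict]) -> str:
        exceptions = [r["invoice"] for r in rows if r["ledger"] != r["bank"]]
        # Example prompt of the kind the hints ask you to document.
        prompt = (
            "Summarize this reconciliation for a controller: "
            f"{len(rows)} invoices checked, exceptions: {exceptions}"
        )
        return stub_llm(prompt)

def run_bot(goal: str) -> str:
    steps = PlanningAgent().plan(goal)  # in a fuller prototype, each
    rows = RetrievalAgent().retrieve()  # step would be dispatched to
    return ReportingAgent().report(rows)  # the matching agent

print(run_bot("Reconcile monthly invoices and summarize exceptions"))
```

Replacing `stub_llm` with a real API call (and documenting the prompt, as required above) is all that changes between this simulation and a working intelligent step.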

Required deliverable: A short presentation explaining the problem, your automation architecture, the intelligent features included, and limitations or next steps.

  • The presentation should include a clear process diagram showing the current manual process and how the automation addresses each step.
  • Present the automation goal, inputs, outputs, and how intelligent elements (e.g., GenAI, agentic coordination, contextual adaptation) are integrated.
  • Discuss limitations of your solution (e.g., data availability, LLM stability, scalability) and identify which parts are working, simulated, or proposed.

  • Use screenshots or short demo clips to walk through your bot in action—don’t rely solely on slides.
  • Keep your presentation focused on the process improvement and the intelligent automation angle—be honest about what works and what is still conceptual.
  • Use the process diagram to anchor your audience—it helps clarify technical flow and supports discussion of future scalability or generalization.

Generative AI Policy

This policy outlines expectations for the responsible and ethical use of generative AI technologies, including large language models (LLMs) such as ChatGPT, in this course. These tools can significantly enhance learning, productivity, and creativity, but they must be used transparently and professionally to support a respectful and effective learning environment.

Permitted Use:

Generative AI may be used to assist with idea generation, research, document drafting, programming, editing, and other academic work, provided the output is critically reviewed, refined, and understood by the student or team. Use of AI is encouraged when it enhances the learning process.

Student Responsibility:

Students are responsible for the accuracy, relevance, and integrity of any work submitted, including content influenced or generated by AI tools. Errors introduced by generative AI (factual, analytical, or interpretive) will be treated as student errors and may result in reduced grades.

Disclosure & Ethics:

Students may be asked to disclose when and how they used generative AI tools in individual or team assignments. In cases where the use of AI significantly contributes to the submission (e.g., coding assistance, text drafting), students should include a brief statement describing the use.

Unacceptable Use:

Submitting AI-generated content without understanding it, using AI to bypass individual learning (e.g., for comprehension-based quizzes or in-class polls), or allowing AI to fabricate sources or misrepresent work is a violation of course expectations and academic integrity.

This policy may be updated as the role of AI in education continues to evolve.