Teams will meet with the instructor and conduct a simulated Client Meeting in class. Teams will present only to the instructor and are required to attend only their assigned time slot. A pre-meeting memo is due before the meeting and a post-meeting memo is due after the meeting.
| Deliverable | Due Date | Canvas Submission Portal | 
|---|---|---|
| Pre-meeting Memo | 10/27 (11:59PM) | Upload to Canvas (one submission per team) | 
| Simulated Meeting | 10/28 (in class) | No Canvas Submission Required | 
| Post-meeting Memo | 10/29 (11:59PM) | Upload to Canvas (one submission per team) | 
Further details are provided below for each required deliverable.
Required deliverable: Following the in-class Helix analysis and prior to the simulated meeting, document the following in a 1/2 to 1 page memo addressed to Bill Titera, titled "Pre-Interview Update on PSU Audit":
Required deliverable: Following the Audit Analytics class, each group will meet individually with the instructor to simulate a client meeting that follows up on issues raised during the audit. Students will play the audit team assigned to undertake an initial control walkthrough meeting with the client. Consider the following key elements of the meeting:
Required deliverable: Following the client interview on 10/28, document the following in a 1/2 to 1 page memo addressed to Bill Titera, titled "Post-Interview Update on PSU Audit":
This policy outlines expectations for the responsible and ethical use of generative AI technologies, including large language models (LLMs) such as ChatGPT, in this course. These tools can significantly enhance learning, productivity, and creativity, but they must be used transparently and professionally to support a respectful and effective learning environment.
Generative AI may be used to assist with idea generation, research, document drafting, programming, editing, and other academic work, provided the output is critically reviewed, refined, and understood by the student or team. Use of AI is encouraged when it enhances the learning process.
Students are responsible for the accuracy, relevance, and integrity of any work submitted, including content influenced or generated by AI tools. Errors introduced by generative AI, whether factual, analytical, or interpretive, will be treated as student errors and may result in reduced grades.
Students may be asked to disclose when and how they used generative AI tools in individual or team assignments. In cases where the use of AI significantly contributes to the submission (e.g., coding assistance, text drafting), students should include a brief statement describing the use.
Submitting AI-generated content without understanding it, using AI to bypass individual learning (e.g., for comprehension-based quizzes or in-class polls), or allowing AI to make up sources or misrepresent work is a violation of course expectations and academic integrity.
This policy may be updated as the role of AI in education continues to evolve.