Accreditation and Reporting

The University of North Georgia’s College of Education is fully accredited by the National Council for Accreditation of Teacher Education (NCATE). In July 2013, the consolidation of NCATE and the Teacher Education Accreditation Council (TEAC) resulted in the creation of the Council for the Accreditation of Educator Preparation (CAEP), a new national accreditor for educator preparation programs.

UNG’s Educator Preparation Provider (EPP) is a College of Education unit consisting of both undergraduate and graduate teacher preparation programs that collaborate on the design, delivery, approval, and accreditation of all programs.

Measure 1. Impact on P-12 Learning and Development (Component 4.1)

The provider documents, using multiple growth measures, that completers contribute to an expected level of student-learning growth.

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.1, the goals are to document our completers’ impact on student learning and development.

As an initial piece of evidence provided here to meet these aspects of the standard, we have data from surveys developed by the Georgia Professional Standards Commission (GaPSC) and provided to inductee, or first-year, teachers and their employers (see “Induction Teacher and Leader Surveys and Employer Surveys” in the supporting evidence below). The GaPSC sends a survey to each EPP’s alumni and their employers one year after the alumni have completed their initial certification and have been employed in a public P-12 school in Georgia (E45). These surveys are based on the InTASC standards and measure perceived comfort/skill in meeting these standards in the classroom (the questions have been provided in the evidence along with the data). These surveys are an indirect measure of completers’ impact on learning and development, classroom instruction, and overall effectiveness, in that direct student data are not included. They are a subjective measure of how completers assess and reflect on their own skills and perceived impact on their classrooms and students, and of how their employers view the completers’ impact and overall effectiveness.

The results from these surveys, though, do provide a means to triangulate data when trends and patterns appear across surveys and are then confirmed via additional measures. While full analysis of these surveys is provided in the supporting evidence below, the patterns that emerged indicated that these first-year educators need additional training in the differentiation of instruction and in classroom management. Moreover, the need for an increased focus on literacy was noted as a pattern. These three main areas were seen in both employer and completer surveys, and they were emphasized via other data measures, which are discussed below. Overall, the results indicated that inductees were confident in their skills and approaches to teaching, instruction, and assessment, along with their use of technology and their understanding of diversity. They also noted that they felt confident in their interactions and professional behaviors as educators.

As another and perhaps more significant measure, we received Teacher Keys Effectiveness System (TKES) data from the Georgia Department of Education (GaDOE) on completers’ Teacher Effectiveness Measure (TEM) scores, Teacher Assessment on Performance Standards (TAPS) scores, Professional Growth ratings, and student growth measures, including Student Growth Percentile (SGP) scores, Student Growth Ratings (SGR), and non-Student Growth Ratings (non-SGR) (see supporting evidence below). To explain this evaluation system: in the state of Georgia, TKES, which is used as a performance system for all public-school educators, “is comprised of three components which contribute to an overall Teacher Effectiveness Measure (TEM): Teacher Assessment on Performance Standards (TAPS), Professional Growth, and Student Growth” (GaDOE, Georgia’s Teacher Keys Effectiveness System: Implementation Handbook, p. 5). Student growth is based on Student Learning Objectives (SLOs), which are “the measure of student growth for non-state-tested subjects. The aggregate measure of SLO performance for all non-tested courses taught by a teacher will be used in calculating his or her TEM” (GaDOE, TEM Scoring Guide and Methodology). Additionally, these scores include Student Growth Percentiles (SGPs), which are “the measure of student growth for core state-tested subjects” (GaDOE, TEM Scoring Guide and Methodology).

The GaDOE’s supplemental descriptions for determining these scores are included as evidence here as well. In terms of the TAPS data, we received a separate breakdown of first-year teachers’ scores on each of the ten standards on which they are evaluated: (1) Professional Knowledge, (2) Instructional Planning, (3) Instructional Strategies, (4) Differentiated Instruction, (5) Assessment Strategies, (6) Assessment Uses, (7) Positive Learning Environment, (8) Academically Challenging Environment, (9) Professionalism, and (10) Communication. Supervisors evaluate teachers each year on all ten TAPS standards, and this evaluation includes walk-throughs, formal observations, and documentation/artifacts provided to supervisors over the course of the year. For our evidence here, we have included the TKES data in several forms, including the following: (1) a three-year aggregate for the Educator Preparation Provider’s (EPP) initial programs as a whole, (2) a breakdown by year for three cycles for the EPP’s initial programs as a whole, and (3) a breakdown by year for three cycles by program (E66).

This data, which is included as an evidence piece for Standard 4, demonstrates that the overwhelming majority of our completers for the past three years have scored at a level 3 (proficient) out of 4 on all aspects, which is where we hope they will be as induction-level, or new, teachers. For the overall TEM scores, the majority scored at a level 3, and for the SGPs, the majority scored at a level 3, with 11.71% scoring at a level 2. With Student Growth Ratings (SGR), the majority scored at a level 3, with 8.40% scoring at a level 2, and with non-Student Growth Ratings, the majority scored at a level 3, with 6.34% scoring at a level 2.

In terms of TAPS data, again, the majority scored at a level 3, with a small percentage scoring at a level 2 for Academically Challenging Environment, Positive Learning Environment, Instructional Strategies, and Differentiated Instruction. While these percentages scoring at a level 2 were small (5.23%, 3.03%, 3.14%, and 3.77%, respectively), we will keep these areas in mind as we continue to revise courses, key assessments, and field and clinical placement requirements to ensure that our completers are as prepared as possible to have the highest positive impact on the P-12 students with whom they work. While the majority scored at a level 3, as noted above, 20.92% scored a level 4 on Academically Challenging Environment, and 19.35% scored a level 4 on Professionalism; as induction-level teachers, they surpassed expectations by achieving the highest possible scores in these areas.

When looking across programs, the Elementary and Special Education program showed the most variation in scoring, with more induction teachers scoring below a 3; however, the majority still scored at a level 3. This variation is not surprising given that this program produces the most alumni, leaving more room for variation among inductee teachers. The TAPS data are an excellent indicator of both impact on students, as they directly address planning, teaching, and assessment, and overall effectiveness, as they cover the full range of a teacher’s responsibilities, from student impact to professionalism.
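
For transparency about how distributions like those above are derived, the following is a minimal sketch of computing level percentages from completer-level TKES records. The record shape and field names (completer_id, measure, level) are hypothetical assumptions for illustration; the GaDOE extract we receive is not in this exact format.

```python
# Sketch: percentage of completers at each TKES rating level for one measure.
# Field names are hypothetical; only the 1-4 rating scale comes from the text.
from collections import Counter

def level_distribution(records, measure):
    """Return the percentage of completers at each rating level (1-4)
    for a given measure (e.g., 'TAPS: Differentiated Instruction')."""
    levels = [r["level"] for r in records if r["measure"] == measure]
    if not levels:
        return {}
    counts = Counter(levels)
    total = len(levels)
    return {level: round(100 * counts.get(level, 0) / total, 2)
            for level in (1, 2, 3, 4)}

# Example with made-up records:
sample = [
    {"completer_id": 1, "measure": "SGP", "level": 3},
    {"completer_id": 2, "measure": "SGP", "level": 2},
    {"completer_id": 3, "measure": "SGP", "level": 3},
]
print(level_distribution(sample, "SGP"))  # {1: 0.0, 2: 33.33, 3: 66.67, 4: 0.0}
```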

We chose to supplement and triangulate this data with a case study to demonstrate impact and effectiveness, for which we interviewed completers across all programs (37 inductee teachers/completers, 14 advanced completers), their employers (37 administrators), and veteran teachers who worked with inductees as mentors in their first or induction-level year (9 mentor teachers). The entire case study, including methodology, data, analysis, and interview protocols/questions, has been included as evidence, but we address some highlights here (see supporting evidence below). Multiple codes emerged from the interviews, including the following: (1) relationship building, (2) academic growth, (3) classroom management, (4) collaboration, (5) communication, (6) content knowledge, (7) data and assessment, (8) differentiation, (9) diversity, (10) field placement experience, (11) pedagogy, (12) professional dispositions, (13) social growth of P-12 students, and (14) technology. For each of these areas, the case study highlights sub-codes from inductee teachers, administrators, and veteran teachers acting as mentors, covering strengths, areas for improvement, suggestions, and overall implications/summaries. In all areas, the feedback was overwhelmingly positive regarding inductees’ pedagogical skills and their overall impact and effectiveness with their P-12 students.

As mirrored in the employer and inductee surveys, classroom management, differentiation, and literacy skills emerged as areas in which inductees could use additional professional development and/or could have used additional or enhanced preparation while in their initial programs. Administrators noted, however, that these are all areas in which they see a need for improvement with veteran teachers as well, and they were not specific to completers from our EPP. In fact, literacy has become a state initiative, as this is a recognized need across the board. As one administrator noted, “I mean, I feel like there’s not one university out there that actually teaches reading the way it’s supposed to be taught. I mean, just from my own personal experience.” With this in mind, we are developing a literacy pilot project as part of a state initiative to integrate additional reading content and/or courses earlier within our programs. We have also had a speaker provide professional development related to dyslexia and techniques for early literacy and reading comprehension, and we are currently working on embedding content related to reading, comprehension, and dyslexia into our programs. Additionally, based on these interviews, we plan to add information to our programs regarding the development of Individualized Education Programs (IEPs) and behavior management plans. While all students receive this information, these were other areas in which the need for increased emphasis was noted.

Our graduates were praised for their ability to build relationships with students, parents, community members, and colleagues, and their abilities to engage students in the classroom were highlighted throughout the interviews. As one administrator noted in regard to student engagement, “I think North Georgia is giving them the skills to do that and not line up 25 desks in a row and sit and teach in the front. ... They realize you’ve got to keep the kids engaged.” Interviews also highlighted completers’ use of technology as a teaching tool, as a data tool, and as a tool for fostering critical thinking. As one administrator stated of an inductee teacher, “I remember going in there, going, ‘Wow. It’s amazing!’ She uses technology not just as a tool to make sure students are proficient with skills, but she is guiding them to create their own stuff. She has them doing electronic portfolios to share. It was just amazing to watch.” In looking at data from inductees across programs, there were no major differences in skills across the areas listed above, and the majority of comments in all areas were positive. In fact, most of the areas for improvement seemed to apply to one particular inductee rather than emerging as patterns.
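
For readers interested in how findings like these can be organized, below is a minimal sketch of tallying coded interview excerpts by respondent group, code, and category. The record shape and names are illustrative assumptions, not the case study’s actual instruments; the real methodology is documented in the case study included as evidence.

```python
# Sketch: counting coded excerpts per (group, code, category).
# The 14 codes come from the narrative above; everything else is assumed.
from collections import defaultdict

CODES = {
    "relationship building", "academic growth", "classroom management",
    "collaboration", "communication", "content knowledge",
    "data and assessment", "differentiation", "diversity",
    "field placement experience", "pedagogy", "professional dispositions",
    "social growth of P-12 students", "technology",
}

def tally_codes(coded_excerpts):
    """Count excerpts per (respondent group, code, category), where each
    excerpt is a dict like {"group": "administrator",
    "code": "differentiation", "category": "area for improvement"}."""
    tally = defaultdict(int)
    for ex in coded_excerpts:
        if ex["code"] in CODES:  # ignore stray codes outside the scheme
            tally[(ex["group"], ex["code"], ex["category"])] += 1
    return dict(tally)

# Example: one administrator excerpt coded as a strength in pedagogy.
print(tally_codes([{"group": "administrator", "code": "pedagogy",
                    "category": "strength"}]))
```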

We are also including Georgia's Teacher Preparation Program Effectiveness Measures (PPEMs) in support of component 4.1. Recently, the Georgia Professional Standards Commission (GaPSC) began publishing scores for Educator Preparation Providers (EPPs) across the state. The EPP as a whole is scored, as is each program (programs are combined for the aggregate score). The PPEM scores are based 50% on in-program measures, comprising edTPA scores (30%) and Georgia Assessments for the Certification of Educators (GACE) content scores (20%), and 50% on outcome measures collected once completers are in their first year of teaching: completers' TAPS scores (30%), the GaPSC employer survey (10%), and the GaPSC inductee/first-year teacher survey (10%), all of which have been discussed previously in this narrative. These five measures "generate one of four possible overall ratings for a provider or program. Level 3 is the expected standard of performance for programs and providers in the state, with about three in four providers earning this rating in prior years for which the calculation was modeled" (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). The scores are calculated using an aggregate of three years of data for all programs (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). More information on how scores are calculated and the consequences of these scores has been included as evidence for Standard 4. As of the 2018-2019 academic year, these scores became consequential, meaning they affect program approval. For the 2018-2019 year, our EPP scored a 4 overall, the highest possible score. This is the first and only year these scores have been published, although we should receive a rating for the 2019-2020 academic year this fall. Data have been included for the EPP as a whole and for each program (see supporting evidence below).
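
As an illustration of the weighting just described, here is a hedged sketch of a PPEM-style composite. The 30/20/30/10/10 weights come from the narrative above; the 0-100 component scale and the cut scores mapping a composite to a Level 1-4 rating are assumptions for illustration only, as the GaPSC's actual scaling is defined in its TPPEM methodology.

```python
# Sketch of the PPEM weighting described above. Only the weights are from
# the narrative; the 0-100 scale and cut scores below are assumptions.
PPEM_WEIGHTS = {
    "edtpa": 0.30,           # in-program: edTPA portfolio scores
    "gace_content": 0.20,    # in-program: GACE content assessment
    "taps": 0.30,            # outcome: first-year TAPS evaluations
    "employer_survey": 0.10, # outcome: GaPSC employer survey
    "inductee_survey": 0.10, # outcome: GaPSC inductee survey
}

def ppem_composite(components):
    """Weighted composite of the five PPEM components (assumed 0-100 scale)."""
    return sum(PPEM_WEIGHTS[name] * score for name, score in components.items())

def ppem_rating(composite):
    """Map a composite to a Level 1-4 rating using hypothetical cut scores."""
    for cut, level in ((90, 4), (70, 3), (50, 2)):  # assumed thresholds
        if composite >= cut:
            return level
    return 1

scores = {"edtpa": 92, "gace_content": 88, "taps": 85,
          "employer_survey": 80, "inductee_survey": 82}
print(ppem_composite(scores), ppem_rating(ppem_composite(scores)))  # 86.9 3
```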

Both in-program and outcome measures are included in the PPEM data provided as evidence, although the outcome measures are the most significant for Standard 4, as they relate solely to completers. Reviewers will also see the employment context for the EPP and for each program, which illustrates employment percentages in Georgia public schools and employment by area in our state. In terms of program scores, the following programs received a score of 4: Elementary and Special Education, Middle Grades, English Education, History Education, Science Education, Kinesiology with Teacher Certification (Health and Physical Education), and Post-Baccalaureate/MAT. The Mathematics Education program scored a 3, and the Art Education and Music Education programs did not have enough data for an overall score due to a low N. As with the GaDOE data described above, these scores corroborate that our graduates are performing well in the classroom. Again, their TAPS data, which cover the state's ten teaching standards, have consistently been strong during the induction years. We attribute this, in part, to the fact that course objectives and content, assessments, and field and clinical experience assignments are aligned to state requirements, InTASC standards, and TAPS requirements. In fact, the Candidate Assessment on Performance Standards (CAPS), the EPP-wide summative field and clinical placement assessment used by faculty and contract supervisors, is based on the TAPS. By the time graduates leave our programs, they are very familiar with the state TAPS rubric, as they are evaluated with this same rubric by both their mentor teachers and university supervisors. Additionally, they self-evaluate throughout the program, and these standards become a part of their daily lives starting in the junior year.

As seen here, we have multiple measures demonstrating that our completers are contributing “to an expected level of student-learning growth,” and, in many cases, they are surpassing these expected levels. Their performance as first-year teachers is strong, consistently at a level 3 and sometimes a 4 across all data measures. These measures illustrate their impact on P-12 student learning and development and serve as indicators of their teaching effectiveness.

Supporting Evidence:

Measure 2. Indicators of Teaching Effectiveness (Component 4.2)

The provider demonstrates that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve. 

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.2, the goals are to document our completers’ teaching effectiveness.

The evidence for this component draws on the same measures detailed under Measure 1: the GaPSC surveys of inductee teachers and their employers, the TKES data (TEM, TAPS, Professional Growth, and student growth measures) provided by the GaDOE, and our case study of completers, administrators, and mentor teachers (see the supporting evidence below). Rather than repeating that narrative in full, we summarize the findings as they bear on teaching effectiveness.

The surveys, which are based on the InTASC standards, indicate that inductees and their employers view completers as confident and skilled in planning, instruction, assessment, technology use, and professional behavior, with differentiation, classroom management, and literacy emerging as areas for additional development. The TKES data show that the overwhelming majority of completers over the past three years scored at level 3 (proficient) or higher across all measures, including the ten TAPS standards, which directly address planning, instruction, assessment, learning environment, professionalism, and communication and are therefore a strong indicator of teaching effectiveness. The case study corroborates these patterns: feedback on inductees’ pedagogy, relationship building, student engagement, and use of technology was overwhelmingly positive, and the areas noted for improvement mirrored the surveys rather than pointing to weaknesses specific to our EPP.

As seen here, we have multiple measures demonstrating that our completers are effective teachers. Their performance as first-year teachers is strong, consistently at a level 3 and sometimes a 4 across all data measures. These measures illustrate their impact on P-12 student learning and development and serve as indicators of their teaching effectiveness.

Supporting Evidence:

Measure 3. Satisfaction of Employers and Employment Milestones (Component 4.3 | A.4.1)

The provider demonstrates that employers are satisfied with completers’ preparation and that completers reach employment milestones such as promotion and retention. 

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.3, the goals are to document employers’ satisfaction and our completers’ employment milestones.

As an initial piece of evidence provided here to meet these aspects of the standard, we have data from surveys developed by the Georgia Professional Standards Commission (GaPSC) and provided to the employers of first-year teachers (see “Employer Surveys” in the supporting evidence below). The GaPSC sends a survey to each employer one year after the alumni have completed their initial certification and have been employed in a public P-12 school in Georgia. These surveys are based on the InTASC standards and measure perceived comfort/skill in meeting these standards in the classroom (the questions have been provided in the evidence along with the data).

The results from these surveys provide a means to triangulate data when trends and patterns appear across surveys and are then confirmed via additional measures. While full analysis of these surveys is provided in the supporting evidence below, the patterns that emerged indicated that these first-year educators need additional training in the differentiation of instruction and in classroom management. Moreover, the need for an increased focus on literacy was noted as a pattern. These three main areas were seen in both employer and completer surveys, and they were emphasized via other data measures, which are discussed below.

We supplemented and triangulated the survey data with the case study described under Measure 1, for which we interviewed completers across all programs, nine mentor teachers, and, most relevant to this component, the 37 administrators who employ our completers (the full methodology, data, analysis, and interview protocols are included in the supporting evidence below). The administrators’ feedback was overwhelmingly positive: they praised completers’ relationship building, student engagement, and use of technology as a teaching tool, a data tool, and a tool for fostering critical thinking. The areas they flagged for growth, namely classroom management, differentiation, and literacy, were ones they also see in veteran teachers and were not specific to completers from our EPP; literacy, in particular, has become a state initiative, and we are developing a literacy pilot project and embedding reading, comprehension, and dyslexia content into our programs in response.

Overall, employers were very satisfied with the new teachers graduating from the University of North Georgia, as evidenced in our case study report and employer survey. This is reflected in the hiring and retention rates for our graduates and first-year teachers. National reports on teacher retention vary, but some indicate that as many as 50% of teachers leave within the first five to seven years and 20% leave within the first one to three years (National Education Association, http://www.nea.org/tools/17054.htm), while other sources cite 40% of teachers leaving within the first five years (McLaughlin, 2018). As García and Weiss (2019) indicated, the number of college students selecting education as a major is dropping as well, which constrains the teacher pipeline. This directly affects EPPs, as many individuals now entering the field of education are opting for alternative certification programs (García & Weiss, 2019).

We have maintained our enrollment in our EPP, an accomplishment of which we are extremely proud given falling EPP enrollments across our state. Additionally, our graduates tend to remain in the field of education at higher rates than graduates of many of our sister institutions in Georgia and at rates above the national figures cited above. As seen in our Employment and Retention Rates evidence piece, 84.4 percent of graduates from UNG are still employed in the public-school sector after four years (see supporting evidence). These numbers do not account for graduates who may move out of state or who may be teaching in the private sector. Retention rates by program are lower for areas such as art, music, and physical education, but this could be due to the availability of positions in these fields: many schools have only one or two such positions, or in some cases share one position across schools, so openings are fewer in number, and many of our graduates take alternate employment until they can find a position teaching in their fields.
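
To show how a figure like the 84.4 percent above can be computed, here is a minimal sketch of a four-year retention calculation. The field names and record shapes are assumptions for illustration; the actual figures come from matched state employment records, which this simplified join only approximates.

```python
# Sketch: share of completers still in a Georgia public school four years
# after completion. All field names are hypothetical.
from datetime import date

def four_year_retention(completers, employment_years):
    """Percent of completers (dicts with 'id' and 'completion_year')
    employed in a public school four years after completing.
    `employment_years` maps completer id -> set of years employed."""
    this_year = date.today().year
    cohort = [c for c in completers if c["completion_year"] + 4 <= this_year]
    if not cohort:
        return 0.0
    retained = sum(
        1 for c in cohort
        if c["completion_year"] + 4 in employment_years.get(c["id"], set())
    )
    return 100 * retained / len(cohort)

# Example: one of two eligible completers still employed in year 4.
print(four_year_retention(
    [{"id": 1, "completion_year": 2015}, {"id": 2, "completion_year": 2015}],
    {1: {2016, 2017, 2018, 2019}},
))  # 50.0
```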

Supporting Evidence:

Measure 4. Satisfaction of Completers (Components 4.4 | A.4.2)

The provider demonstrates that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective.  

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.4 and the satisfaction of completers, our evidence demonstrates that our graduates find their preparation to be relevant to and effective for their positions as educators.

The most significant piece of evidence documenting our completers' satisfaction is our case study report. For this study, we interviewed 37 completers from across all programs to determine not only their impact on P-12 students' learning and development but also how their EPP prepared them for their positions and where they saw room for improvement. Overwhelmingly, completers noted that they were satisfied with their preparation, and again, the major areas for improvement centered on additional preparation in differentiation, classroom management, and literacy. In particular, they noted that they were strong in their abilities to forge relationships with students, parents, and colleagues. They also indicated that the depth of field and clinical experiences prepared them to enter their new positions with increased confidence, allowing them to enter the classroom with a better understanding of (1) the importance of data to the job of a teacher, (2) flexibility as an educator, and (3) relationships based on getting to know one's students.

This satisfaction was reflected, as well, in the inductee surveys provided by the GaPSC, which demonstrate overall confidence in planning, instruction, assessment, and professional behaviors. Again, these surveys mirror the case study, with an emphasis on differentiation and classroom management. In particular, inductees expressed the need for additional knowledge on working with the various learners in their classrooms, including students who are gifted, at-risk, diverse, English Language Learners, and students with special needs. While all of our students are required to have diverse placements, we plan to integrate more information on differentiation into classes, particularly as related to the diversity of learners. In conclusion, we believe the satisfaction of completers is evidenced in our retention numbers for the first years of teaching. The fact that our graduates tend to remain in the field is a testament to their strength as teachers and to their satisfaction with their employment and the preparation that helped them get there. This is evidenced, too, in the fact that the majority of our advanced-level M.Ed. candidates and many of our Ed.L. candidates attained their initial degrees at the University of North Georgia and return to advance their education and careers.

Supporting Evidence:

Measure 5. Graduation Rates (Initial & Advanced Levels)

The graduation data included in Table 11 were collected through the University of North Georgia Banner System. To determine graduation rates, the graduation term data was compared to the admissions term data collected in the College of Education Program Admissions Office. Table 11 includes the graduation rates for all initial and advanced level programs.
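
As a transparency aid, the following is a minimal sketch of the graduation-rate calculation described above: graduation terms from the Banner system joined, by student, to the admissions terms kept by the College of Education Program Admissions Office. The field names are hypothetical assumptions, not the actual Banner schema.

```python
# Sketch: percent of admitted candidates who later graduate from the
# same program. Field names are hypothetical.
def graduation_rate(admissions, graduations, program):
    """Percent of candidates admitted to `program` who later appear in
    the graduation data for that program. Both inputs are lists of
    dicts with at least 'student_id' and 'program' keys."""
    admitted = {a["student_id"] for a in admissions if a["program"] == program}
    graduated = {g["student_id"] for g in graduations if g["program"] == program}
    if not admitted:
        return 0.0
    return 100 * len(admitted & graduated) / len(admitted)

# Example: two admitted, one graduated -> 50.0
print(graduation_rate(
    [{"student_id": 1, "program": "Middle Grades"},
     {"student_id": 2, "program": "Middle Grades"}],
    [{"student_id": 1, "program": "Middle Grades"}],
    "Middle Grades",
))
```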

Supporting Evidence:

Measure 6. Ability of Completers to Meet Licensing (Certification) and Any Additional State Requirements; Title II (Initial & Advanced Levels)

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate the ability of completers to meet licensing (certification) and state requirements. In this section, we present our Title II reporting and Georgia Professional Standards Commission Preparation Program Effectiveness Measures as supporting evidence.

As described under Measure 1, the GaPSC publishes Preparation Program Effectiveness Measure (PPEM) scores for each EPP and for each of its programs, combining in-program measures (edTPA, 30%; GACE content, 20%) with outcome measures from completers' first year of teaching (TAPS scores, 30%; the GaPSC employer survey, 10%; the GaPSC inductee survey, 10%). These ratings became consequential for program approval as of the 2018-2019 academic year, the first year scores were published, and our EPP earned a 4 overall, the highest possible rating; we should receive a rating for the 2019-2020 academic year this fall. More information on how the scores are calculated and the consequences of these scores, along with data for the EPP as a whole and for each program, is included in the supporting evidence below.

Supporting Evidence:

Measure 7. Ability of Completers to Be Hired in Education Positions for Which They Have Prepared (Initial & Advanced Levels)

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate the ability of completers to be hired in education positions for which they have prepared. In this section, we present our Employment and Retention Rates as supporting evidence.

As seen in the Employment and Retention Rates supporting evidence (below in Tables 1 to 3), the majority of graduates from the University of North Georgia earn positions upon graduating and are employed one year after graduation in a public school in Georgia. It is important to note that this does not account for graduates who may leave the state of Georgia or who may find employment in the private school sector in Georgia. Unfortunately, we do not have a tracking mechanism for these students.
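
The sketch below illustrates how a one-year-out employment rate like those in Tables 1 to 3 can be computed, with hypothetical field names. As noted above, completers who leave Georgia or teach in private schools cannot be matched and so would be counted as not employed, which understates the true rate.

```python
# Sketch: share of a graduating cohort found on Georgia public-school
# rosters one year after graduation. All names are hypothetical.
def one_year_employment_rate(completers, public_school_rosters):
    """Percent of completers (dicts with 'id' and 'grad_year') found on
    a public-school roster one year after graduation.
    `public_school_rosters` maps a year -> set of employee ids."""
    if not completers:
        return 0.0
    employed = sum(
        1 for c in completers
        if c["id"] in public_school_rosters.get(c["grad_year"] + 1, set())
    )
    return 100 * employed / len(completers)

# Example: both 2018 graduates matched on the 2019 roster -> 100.0
print(one_year_employment_rate(
    [{"id": 1, "grad_year": 2018}, {"id": 2, "grad_year": 2018}],
    {2019: {1, 2}},
))
```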

Additionally, data for the Early Childhood Education (ECE) program are presented for only one year; this was an evening program that was discontinued after academic year 2015-2016. Similarly, the Educational Leadership program did not have graduates until 2017, so data are not included for the two years prior. Lower employment rates are to be expected in certain programs, including history education, art education, music education, and physical education or kinesiology. There are fewer positions available in these fields, and thus these positions are more competitive and more difficult to attain across the state; for example, in art, music, and physical education, most elementary and middle schools have only one or two positions. These fields are represented in the Master of Arts in Teaching and Post-Baccalaureate program as well, which affects that program's overall employment numbers.

Supporting Evidence:

Measure 8. Student Loan Default Rates and Other Consumer Information (Initial & Advanced Levels)

Supporting Evidence:

Program Quality

Information on education salaries in Georgia and GaPSC tiered certification
