
Accreditation and Reporting

The University of North Georgia’s College of Education is fully accredited by the National Council for Accreditation of Teacher Education (NCATE). In July 2013, the consolidation of NCATE and the Teacher Education Accreditation Council (TEAC) resulted in the creation of the Council for the Accreditation of Educator Preparation (CAEP) – a new, national accreditor for educator preparation programs. UNG’s Educator Preparation Provider (EPP) is a College of Education unit consisting of both undergraduate and graduate teacher preparation programs that collaborate on the design, delivery, approval, and accreditation of all programs. You will find information about our current CAEP review, programs, and annual reporting measures on this page.

Solicitation of Third Party Comments

The College of Education at the University of North Georgia is hosting an accreditation visit by the Council for the Accreditation of Educator Preparation (CAEP) on May 2-4, 2021. Interested parties are invited to submit third-party comments to the evaluation team. Please note that comments must address substantive matters related to the quality of the professional education programs offered and should specify the party's relationship to the provider (e.g., graduate, present or former faculty member, employer of graduates).

We invite you to submit written testimony to:

CAEP
1140 19th Street NW, Suite 400
Washington, DC 20036
Or by email to: callforcomments@caepnet.org

Initial and Advanced Educator Preparation Programs

Council for the Accreditation of Educator Preparation Initial Teacher Preparation and Advanced Programs Under Review, 2020-2021
Program Name Degree Level
Art Education Baccalaureate
Biology Education Baccalaureate
Chemistry Education Baccalaureate
Curriculum and Instruction Master's
Elementary and Special Education Baccalaureate
English Education Baccalaureate
History Education Baccalaureate
Kinesiology with Teacher Certification Baccalaureate
Mathematics Education Baccalaureate
Middle Grades Education Baccalaureate
Music Education Baccalaureate
Physics Education Baccalaureate
Post-Baccalaureate and Master of Arts in Teaching Post-Baccalaureate
Tier I Educational Leadership Post-Master's
Tier II Educational Leadership Specialist or Certificate of Advanced Study
National Council for Accreditation of Teacher Education Approved Programs, 2012
Program Name Degree Level
Early Childhood Education Baccalaureate
Early Childhood/Special Education Baccalaureate
Middle Grades Education Baccalaureate
Special Education Baccalaureate
Secondary English Baccalaureate
Secondary Math Baccalaureate
Secondary Science/Bio/Chem/Physics Baccalaureate
Secondary History/Social Science Baccalaureate
P-12 Foreign Language Baccalaureate
P-12 Health & Physical Education Baccalaureate
P-12 Art Baccalaureate
P-12 Music Baccalaureate
Post-Baccalaureate - Secondary Education Baccalaureate
Early Childhood Education Master's
Middle Grades Education Master's
Special Education Master's
Secondary English Master's
Secondary Math Master's
Secondary Science/Bio/Chem/Physics Master's
Secondary History/Social Science Master's
P-12 Health & Physical Education Master's
P-12 Art Master's
Education Specialist in Teacher Leadership Post-Master's
Educational Leadership Add-on L-6
Teacher Support Specialist (TSS) Endorsement
English as a Second Language (ESOL) Endorsement
Gifted Endorsement
ECE-Reading Endorsement
ECE-Math Endorsement

CAEP Annual Reporting Measures

1. Impact on P-12 Learning and Development (Component 4.1)

Measure 1. Impact on P-12 Learning and Development (Component 4.1)

The provider documents, through multiple growth measures, that completers contribute to student learning.

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.1, the goals are to document our completers’ impact on student learning and development.

As an initial piece of evidence provided here to meet these aspects of the standard, we have data from surveys developed by the Georgia Professional Standards Commission (GaPSC) provided to inductee or first-year teachers and the employers of these educators (see “Induction Teacher and Leader Surveys and Employer Surveys” in the supporting evidence below). The GaPSC sends a survey to each EPP’s alumni and their employers one year after they have completed their initial certification and have been employed in a public P-12 school in Georgia (E45). These surveys are based on InTASC standards, and they measure perceived comfort/skill regarding meeting these standards in the classroom (the questions have been provided in the evidence along with the data). These surveys are an indirect measure of completers’ impact on learning and development, classroom instruction, and overall effectiveness in that direct student data is not included in these surveys. They are a subjective measure of how completers measure, assess, and reflect on their own skills and their perceived impact on their classrooms and students and how their employers view the completers’ impact on and overall effectiveness in regard to classrooms and students.

The results from these surveys, though, do provide us a means to triangulate data when we see trends and patterns across surveys and then via additional measures. This affords us a way to utilize multiple measures to determine where trends exist. While analysis of these surveys has been provided in the supporting evidence below, patterns that emerged from the surveys indicated that these first-year educators need additional training in the differentiation of instruction and classroom management. Moreover, the need for an increased focus on literacy was noted as a pattern. These three main areas were seen in employer and completer surveys, and they were emphasized via other data measures, which will be discussed below. Overall, the results indicated that inductees were confident in their skills and approaches to teaching, instruction, and assessment, along with their use of technology and their understandings of diversity. They also noted that they felt confident in terms of their interactions and professional behaviors as educators.

As another and perhaps more significant measure, we received Teacher Keys Effectiveness System (TKES) data from the Georgia Department of Education (GaDOE) on completers’ Teacher Effectiveness Measure (TEM) Scores, Teacher Assessment Performance System (TAPS) scores, Professional Growth ratings, and student growth measures, including Student Growth Percentile (SGP) scores, Student Growth Ratings (SGR), and non-Student Growth Ratings (non-SGR) (see supporting evidence below). To explain this evaluation system, in the state of Georgia, the Teacher Keys Effectiveness System (TKES), which is used as a performance system for all public-school educators, “is comprised of three components which contribute to an overall Teacher Effectiveness Measure (TEM): Teacher Assessment on Performance Standards (TAPS), Professional Growth, and Student Growth” (GaDOE, Georgia’s Teacher Keys Effectiveness System: Implementation Handbook, p. 5). As for student growth, this is based on Student Learning Objectives (SLOs), which are “the measure of student growth for non-state-tested subjects. The aggregate measure of SLO performance for all non-tested courses taught by a teacher will be used in calculating his or her TEM” (GaDOE, TEM Scoring Guide and Methodology). Additionally, these scores include Student Growth Percentiles (SGPs), which are “the measure of student growth for core state-tested subjects” (GaDOE, TEM Scoring Guide and Methodology).

The GaDOE’s supplemental descriptions for determining these scores are included as evidence here as well. In terms of the TAPS data, we received a separate breakdown of first-year teachers’ scores by each of the ten standards on which they are evaluated, including the following: (1) Professional Knowledge, (2) Instructional Planning, (3) Instructional Strategies, (4) Differentiated Instruction, (5) Assessment Strategies, (6) Assessment Uses, (7) Positive Learning Environment, (8) Academically Challenging Environment, (9) Professionalism, and (10) Communication. Supervisors evaluate teachers each year on all ten TAPS standards, and this evaluation includes walk-throughs, formal observations, and documentation/artifacts provided to supervisors over the course of the year. For our evidence here, we have included the TKES data in several forms, including the following: (1) a three-year aggregate for the Educator Preparation Provider’s (EPP) initial programs as a whole, (2) a breakdown by year for three cycles for the EPP’s initial programs as a whole, and (3) a breakdown by year for three cycles by program (E66).

This data, which is included as an evidence piece for Standard 4, demonstrates that the overwhelming majority of our completers for the past three years have scored at a level 3 (proficient) out of 4 on all aspects, which is where we hope they will be as induction-level, or new, teachers. For the overall TEM scores, the majority scored at a level 3, and for the SGPs, the majority scored at a level 3, with 11.71% scoring at a level 2. With Student Growth Ratings (SGR), the majority scored at a level 3, with 8.40% scoring at a level 2, and with non-Student Growth Ratings (non-SGR), the majority scored at a level 3, with 6.34% scoring at a level 2. In terms of TAPS data, again, the majority scored at a level 3, with a small percentage scoring at a level 2 for Academically Challenging Environment, Positive Learning Environment, Instructional Strategies, and Differentiated Instruction. While these percentages scoring at a level 2 were small (5.23%, 3.03%, 3.14%, and 3.77%, respectively), we will keep these areas in mind as we continue to revise courses, key assessments, and field and clinical placement requirements to ensure that our completers are as prepared as possible to have the highest positive impact on the P-12 students with whom they work. With regard to TAPS, while the majority scored at a level 3, as noted above, 20.92% scored a level 4 on Academically Challenging Environment, and 19.35% scored a level 4 on Professionalism. These induction-level teachers surpassed expectations by achieving the highest possible scores in these areas. When looking across programs, the Elementary and Special Education program showed the most variation in scoring, with more induction teachers scoring below a 3; however, the majority still scored at a level 3. This variation is not surprising considering that this program produces the most alumni, and thus there is more room for variation amongst inductee teachers.
The TAPS data are an excellent indicator of both impact on students, as they directly address planning, teaching, and assessment, and overall effectiveness, as they cover a wide range of overall responsibilities in regard to the role of a teacher, from student impact to professionalism.

We chose to supplement and triangulate this data with a case study to demonstrate impact and effectiveness, for which we interviewed completers across all programs (37 inductee teachers/completers, 14 advanced completers), their employers (37 administrators), and veteran teachers who worked with inductees as mentors in their first or induction-level year (9 mentor teachers). The entire case study, including methodology, data, analysis, and interview protocols/questions, has been included here as evidence, but we will address some highlights here (see supporting evidence below). There were multiple codes that emerged from the interviews, including the following: (1) relationship building, (2) academic growth, (3) classroom management, (4) collaboration, (5) communication, (6) content knowledge, (7) data and assessment, (8) differentiation, (9) diversity, (10) field placement experience, (11) pedagogy, (12) professional dispositions, (13) social growth of P-12 students, and (14) technology. For each of these areas, the case study highlights sub-codes from inductee teachers, administrators, and veteran teachers acting as mentors. For each sub-code, the case study covers strengths, areas for improvement, suggestions, and overall implications/summaries. In all areas, the feedback was overwhelmingly positive regarding inductees’ skills in terms of pedagogy and their overall impact and effectiveness on their P-12 students. As mirrored in the employer and inductee surveys, classroom management, differentiation, and literacy skills emerged as areas in which inductees could use additional professional development and/or could have used additional or enhanced preparation while in their initial programs. Administrators noted, however, that these are all areas in which they see a need for improvement with veteran teachers as well, and they were not specific to completers from our EPP. 
In fact, literacy has become a state initiative, as this is a recognized need across the board. As one administrator noted, “I mean, I feel like there’s not one university out there that actually teaches reading the way it’s supposed to be taught. I mean, just from my own personal experience.” With this in mind, we are developing a literacy pilot project as part of a state initiative to integrate additional reading content and/or courses earlier within our programs. We have also had a speaker provide professional development related to dyslexia and techniques for early literacy and reading comprehension, and we are currently working on embedding content related to reading, comprehension, and dyslexia into our programs. Additionally, based on these interviews, we plan to add information to our programs regarding the development of Individualized Education Plans (IEPs) and behavior management plans. While all students receive this information, these were other areas in which the need for increased emphasis was noted. Our graduates were praised for their ability to build relationships with students, parents, community members, and colleagues, and their abilities to engage students in the classroom were highlighted throughout the interviews. As one administrator noted in regard to student engagement, “I think North Georgia is giving them the skills to do that and not line up 25 desks in a row and sit and teach in the front. ... They realize you’ve got to keep the kids engaged.” Interviews also highlighted completers’ use of technology as a teaching tool, as a data tool, and as a tool for inciting critical thinking. As one administrator stated of an inductee teacher, “I remember going in there, going, ‘Wow. It’s amazing!’ She uses technology not just as tool to make sure students are proficient with skills, but she is guiding them to create their own stuff. She has them doing electronic portfolios to share.
It was just amazing to watch.” Looking at data from inductees across programs, there were no major differences in skills across the areas listed above, and again, the majority of comments in all areas were positive. In fact, most of the areas for improvement seemed to be tied to one particular inductee rather than emerging as patterns.

We are also including Georgia's Teacher Preparation Program Effectiveness Measures (PPEMs) in support of component 4.1. Recently, the Georgia Professional Standards Commission (GaPSC) began publishing scores for Educator Preparation Providers (EPPs) across the state. The EPP as a whole is scored, as is each program (programs are combined for the aggregate score). The PPEM scores are based on in-program measures (50%), including the edTPA scores (30%) and the Georgia Assessments for the Certification of Educators (GACE) content scores (20%); the other 50% is based on outcome measures that come into play once completers are in their first year of teaching. These outcome measures stem from the EPP completers' TAPS scores (30%), the GaPSC employer survey (10%), and the GaPSC inductee/first-year teacher survey (10%), all of which have been mentioned previously in this narrative. These five measures "generate one of four possible overall ratings for a provider or program. Level 3 is the expected standard of performance for programs and providers in the state, with about three in four providers earning this rating in prior years for which the calculation was modeled" (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). The scores are calculated utilizing an aggregate of three years of data for all programs (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). More information on how scores are calculated and the consequences of these scores has been included as evidence for Standard 4. As of the 2018-2019 academic year, these scores became consequential, meaning they have an impact on program approval. For the 2018-2019 year, our EPP scored a 4 overall, which is the highest possible score. This is the first and only year these scores have been published, although we should receive a rating for the 2019-2020 academic year this fall. Data have been included for the EPP as a whole and for each program (see supporting evidence below).
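The component weighting just described can be sketched as a simple calculation. This is an illustrative sketch only: the weights come from the narrative above, but the component scores and the 0-100 normalization are hypothetical, and the GaPSC's actual TPPEM calculation uses standardized three-year aggregates and its own cut scores to assign the final 1-4 rating.

```python
# Weights as stated in the narrative: 50% in-program, 50% outcome measures.
PPEM_WEIGHTS = {
    "edTPA": 0.30,            # in-program measure
    "GACE_content": 0.20,     # in-program measure
    "TAPS": 0.30,             # outcome measure (first-year evaluation)
    "employer_survey": 0.10,  # outcome measure
    "inductee_survey": 0.10,  # outcome measure
}

def ppem_composite(scores: dict) -> float:
    """Weighted composite of normalized (0-100) component scores.

    Hypothetical normalization -- the real GaPSC methodology standardizes
    each measure before weighting.
    """
    missing = set(PPEM_WEIGHTS) - set(scores)
    if missing:
        raise ValueError(f"missing components: {sorted(missing)}")
    return sum(PPEM_WEIGHTS[k] * scores[k] for k in PPEM_WEIGHTS)

# Hypothetical normalized component scores for one program:
example = {
    "edTPA": 85.0,
    "GACE_content": 90.0,
    "TAPS": 88.0,
    "employer_survey": 80.0,
    "inductee_survey": 82.0,
}
composite = ppem_composite(example)
# In-program share: 0.30 + 0.20 = 50%; outcome share: 0.30 + 0.10 + 0.10 = 50%.
```

The point of the sketch is the balance built into the measure: half of a program's rating is earned before completion (edTPA and GACE) and half is earned by completers in their first year on the job.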

As part of this data, both in-program and outcome measures have been included, although the outcome measures are the most significant for Standard 4, as they relate solely to completers. Reviewers will also see the employment context for both the EPP and its programs, which illustrates employment percentages in Georgia Public Schools and Employment by Area in our state. In terms of program scores, the following programs received a score of 4: Elementary and Special Education, Middle Grades, English Education, History Education, Science Education, Kinesiology with Teacher Certification (Health and Physical Education), and Post Baccalaureate/MAT. The Mathematics Education program scored a 3, and the following programs did not have enough data for an overall score due to a low N: Art Education and Music Education. As noted in the data from the GaDOE described above, these scores corroborate that our graduates are performing well in the classroom. Again, their TAPS data, which cover the state's ten standards for teaching, have consistently been strong during the induction years. We hope this is a result of the fact that course objectives and content, assessments, and field and clinical experience assignments are aligned to state requirements, InTASC standards, and TAPS requirements. In fact, the Candidate Assessment on Performance Standards (CAPS) EPP-wide assessment that faculty and contract supervisors utilize as a summative field and clinical placement assessment is based on the TAPS. By the time graduates leave our program, they are very familiar with the state TAPS rubric, as they are evaluated using this same rubric by both their mentor teachers and university supervisors. Additionally, they self-evaluate throughout the program, and these standards become a part of their daily lives starting in the junior year.

As seen here, we have multiple measures to demonstrate that our completers are contributing "to an expected level of student-learning growth," and, in many cases, they are surpassing these expected levels. Their performance as first-year teachers is strong, consistently scoring at a level 3 and sometimes a level 4 across all data measures. These measures illustrate their impact on P-12 student learning and development and serve as indicators of their teaching effectiveness.

Supporting Evidence:

2. Indicators of Teaching Effectiveness (Component 4.2)

Measure 2. Indicators of Teaching Effectiveness (Component 4.2)

The provider demonstrates that completers effectively apply the professional knowledge, skills, and dispositions that the preparation experiences were designed to achieve. 

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.2, the goals are to document our completers’ teaching effectiveness.

As an initial piece of evidence provided here to meet these aspects of the standard, we have data from surveys developed by the Georgia Professional Standards Commission (GaPSC) provided to inductee or first-year teachers and the employers of these educators (see “Induction Teacher and Leader Surveys and Employer Surveys” in the supporting evidence below). The GaPSC sends a survey to each EPP’s alumni and their employers one year after they have completed their initial certification and have been employed in a public P-12 school in Georgia. These surveys are based on InTASC standards, and they measure perceived comfort/skill regarding meeting these standards in the classroom (the questions have been provided in the evidence along with the data). These surveys are an indirect measure of completers’ impact on learning and development, classroom instruction, and overall effectiveness in that direct student data is not included in these surveys. They are a subjective measure of how completers measure, assess, and reflect on their own skills and their perceived impact on their classrooms and students and how their employers view the completers’ impact on and overall effectiveness in regard to classrooms and students.

The results from these surveys, though, do provide us a means to triangulate data when we see trends and patterns across surveys and then via additional measures. This affords us a way to utilize multiple measures to determine where trends exist. While analysis of these surveys has been provided in the supporting evidence below, patterns that emerged from the surveys indicated that these first-year educators need additional training in the differentiation of instruction and classroom management. Moreover, the need for an increased focus on literacy was noted as a pattern. These three main areas were seen in employer and completer surveys, and they were emphasized via other data measures, which will be discussed below. Overall, the results indicated that inductees were confident in their skills and approaches to teaching, instruction, and assessment, along with their use of technology and their understandings of diversity. They also noted that they felt confident in terms of their interactions and professional behaviors as educators.

As another and perhaps more significant measure, we received Teacher Keys Effectiveness System (TKES) data from the Georgia Department of Education (GaDOE) on completers’ Teacher Effectiveness Measure (TEM) Scores, Teacher Assessment Performance System (TAPS) scores, Professional Growth ratings, and student growth measures, including Student Growth Percentile (SGP) scores, Student Growth Ratings (SGR), and non-Student Growth Ratings (non-SGR) (see supporting evidence below). To explain this evaluation system, in the state of Georgia, the Teacher Keys Effectiveness System (TKES), which is used as a performance system for all public-school educators, “is comprised of three components which contribute to an overall Teacher Effectiveness Measure (TEM): Teacher Assessment on Performance Standards (TAPS), Professional Growth, and Student Growth” (GaDOE, Georgia’s Teacher Keys Effectiveness System: Implementation Handbook, p. 5). As for student growth, this is based on Student Learning Objectives (SLOs), which are “the measure of student growth for non-state-tested subjects. The aggregate measure of SLO performance for all non-tested courses taught by a teacher will be used in calculating his or her TEM” (GaDOE, TEM Scoring Guide and Methodology). Additionally, these scores include Student Growth Percentiles (SGPs), which are “the measure of student growth for core state-tested subjects” (GaDOE, TEM Scoring Guide and Methodology).

The GaDOE’s supplemental descriptions for determining these scores are included as evidence here as well. In terms of the TAPS data, we received a separate breakdown of first-year teachers’ scores by each of the ten standards on which they are evaluated, including the following: (1) Professional Knowledge, (2) Instructional Planning, (3) Instructional Strategies, (4) Differentiated Instruction, (5) Assessment Strategies, (6) Assessment Uses, (7) Positive Learning Environment, (8) Academically Challenging Environment, (9) Professionalism, and (10) Communication. Supervisors evaluate teachers each year on all ten TAPS standards, and this evaluation includes walk-throughs, formal observations, and documentation/artifacts provided to supervisors over the course of the year. For our evidence here, we have included the TKES data in several forms, including the following: (1) a three-year aggregate for the Educator Preparation Provider’s (EPP) initial programs as a whole, (2) a breakdown by year for three cycles for the EPP’s initial programs as a whole, and (3) a breakdown by year for three cycles by program (E66).

This data, which is included as an evidence piece for Standard 4, demonstrates that the overwhelming majority of our completers for the past three years have scored at a level 3 (proficient) out of 4 on all aspects, which is where we hope they will be as induction-level, or new, teachers. For the overall TEM scores, the majority scored at a level 3, and for the SGPs, the majority scored at a level 3, with 11.71% scoring at a level 2. With Student Growth Ratings (SGR), the majority scored at a level 3, with 8.40% scoring at a level 2, and with non-Student Growth Ratings (non-SGR), the majority scored at a level 3, with 6.34% scoring at a level 2. In terms of TAPS data, again, the majority scored at a level 3, with a small percentage scoring at a level 2 for Academically Challenging Environment, Positive Learning Environment, Instructional Strategies, and Differentiated Instruction. While these percentages scoring at a level 2 were small (5.23%, 3.03%, 3.14%, and 3.77%, respectively), we will keep these areas in mind as we continue to revise courses, key assessments, and field and clinical placement requirements to ensure that our completers are as prepared as possible to have the highest positive impact on the P-12 students with whom they work. With regard to TAPS, while the majority scored at a level 3, as noted above, 20.92% scored a level 4 on Academically Challenging Environment, and 19.35% scored a level 4 on Professionalism. These induction-level teachers surpassed expectations by achieving the highest possible scores in these areas. When looking across programs, the Elementary and Special Education program showed the most variation in scoring, with more induction teachers scoring below a 3; however, the majority still scored at a level 3. This variation is not surprising considering that this program produces the most alumni, and thus there is more room for variation amongst inductee teachers.
The TAPS data are an excellent indicator of both impact on students, as they directly address planning, teaching, and assessment, and overall effectiveness, as they cover a wide range of overall responsibilities in regard to the role of a teacher, from student impact to professionalism.

We chose to supplement and triangulate this data with a case study to demonstrate impact and effectiveness, for which we interviewed completers across all programs (37 inductee teachers/completers, 14 advanced completers), their employers (37 administrators), and veteran teachers who worked with inductees as mentors in their first or induction-level year (9 mentor teachers). The entire case study, including methodology, data, analysis, and interview protocols/questions, has been included here as evidence, but we will address some highlights here (see supporting evidence below). There were multiple codes that emerged from the interviews, including the following: (1) relationship building, (2) academic growth, (3) classroom management, (4) collaboration, (5) communication, (6) content knowledge, (7) data and assessment, (8) differentiation, (9) diversity, (10) field placement experience, (11) pedagogy, (12) professional dispositions, (13) social growth of P-12 students, and (14) technology. For each of these areas, the case study highlights sub-codes from inductee teachers, administrators, and veteran teachers acting as mentors. For each sub-code, the case study covers strengths, areas for improvement, suggestions, and overall implications/summaries. In all areas, the feedback was overwhelmingly positive regarding inductees’ skills in terms of pedagogy and their overall impact and effectiveness on their P-12 students. As mirrored in the employer and inductee surveys, classroom management, differentiation, and literacy skills emerged as areas in which inductees could use additional professional development and/or could have used additional or enhanced preparation while in their initial programs. Administrators noted, however, that these are all areas in which they see a need for improvement with veteran teachers as well, and they were not specific to completers from our EPP. 
In fact, literacy has become a state initiative, as this is a recognized need across the board. As one administrator noted, “I mean, I feel like there’s not one university out there that actually teaches reading the way it’s supposed to be taught. I mean, just from my own personal experience.” With this in mind, we are developing a literacy pilot project as part of a state initiative to integrate additional reading content and/or courses earlier within our programs. We have also had a speaker provide professional development related to dyslexia and techniques for early literacy and reading comprehension, and we are currently working on embedding content related to reading, comprehension, and dyslexia into our programs. Additionally, based on these interviews, we plan to add information to our programs regarding the development of Individualized Education Plans (IEPs) and behavior management plans. While all students receive this information, these were other areas in which the need for increased emphasis was noted. Our graduates were praised for their ability to build relationships with students, parents, community members, and colleagues, and their abilities to engage students in the classroom were highlighted throughout the interviews. As one administrator noted in regard to student engagement, “I think North Georgia is giving them the skills to do that and not line up 25 desks in a row and sit and teach in the front. ... They realize you’ve got to keep the kids engaged.” Interviews also highlighted completers’ use of technology as a teaching tool, as a data tool, and as a tool for inciting critical thinking. As one administrator stated of an inductee teacher, “I remember going in there, going, ‘Wow. It’s amazing!’ She uses technology not just as tool to make sure students are proficient with skills, but she is guiding them to create their own stuff. She has them doing electronic portfolios to share.
It was just amazing to watch.” Looking at data from inductees across programs, there were no major differences in skills across the areas listed above, and again, the majority of comments in all areas were positive. In fact, most of the areas for improvement seemed to be tied to one particular inductee rather than emerging as patterns.

As seen here, we have multiple measures to demonstrate that our completers are effective teachers. Their performance as first-year teachers is strong, consistently scoring at a level 3 and sometimes a level 4 across all data measures. These measures illustrate their impact on P-12 student learning and development and serve as indicators of their teaching effectiveness.

Supporting Evidence:

Measure 3. Satisfaction of Employers and Employment Milestones (Component 4.3 | A.4.1)

The provider demonstrates that employers are satisfied with completers’ preparation and that completers reach employment milestones such as promotion and retention. 

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.3, the goals are to document our employers’ satisfaction and our completers’ employment milestones.

Initial Programs

As an initial piece of evidence for these aspects of the standard, we have data from surveys developed by the Georgia Professional Standards Commission (GaPSC) and administered to the employers of first-year teachers (see “GaPSC Survey of Employers of Induction Teachers” in the supporting evidence below). The GaPSC sends a survey to each employer one year after the alumni have completed their initial certification and have been employed in a public P-12 school in Georgia. These surveys are based on InTASC standards, and they measure perceived comfort/skill in meeting these standards in the classroom (the questions have been provided in the evidence along with the data).

The results from these surveys give us a means to triangulate: when trends and patterns appear across surveys, we can confirm them against additional measures. While a full analysis of these surveys is provided in the supporting evidence below, the patterns that emerged indicated that these first-year educators need additional training in the differentiation of instruction and in classroom management. Moreover, the need for an increased focus on literacy was noted as a pattern. These three main areas were seen in both employer and completer surveys, and they were reinforced by other data measures, which are discussed below.

We chose to supplement and triangulate this data with a case study to demonstrate impact and effectiveness, for which we interviewed completers across all programs (37 inductee teachers/completers, 14 advanced completers), their employers (37 administrators), and veteran teachers who worked with inductees as mentors in their first or induction-level year (9 mentor teachers). The entire case study, including methodology, data, analysis, and interview protocols/questions, is included in full as evidence (see supporting evidence below), but we address some highlights here. Multiple codes emerged from the interviews, including the following: (1) relationship building, (2) academic growth, (3) classroom management, (4) collaboration, (5) communication, (6) content knowledge, (7) data and assessment, (8) differentiation, (9) diversity, (10) field placement experience, (11) pedagogy, (12) professional dispositions, (13) social growth of P-12 students, and (14) technology. For each of these areas, the case study highlights sub-codes from inductee teachers, administrators, and veteran teachers acting as mentors. For each sub-code, the case study covers strengths, areas for improvement, suggestions, and overall implications/summaries. In all areas, the feedback was overwhelmingly positive regarding inductees’ skills in terms of pedagogy and their overall impact and effectiveness on their P-12 students. As mirrored in the employer and inductee surveys, classroom management, differentiation, and literacy skills emerged as areas in which inductees could use additional professional development and/or could have used additional or enhanced preparation while in their initial programs. Administrators noted, however, that these are all areas in which they see a need for improvement with veteran teachers as well, and they were not specific to completers from our EPP.
In fact, literacy has become a state initiative, as this is a recognized need across the board. As one administrator noted, “I mean, I feel like there’s not one university out there that actually teaches reading the way it’s supposed to be taught. I mean, just from my own personal experience.” With this in mind, we are developing a literacy pilot project as part of a state initiative to integrate additional reading content and/or courses earlier within our programs. We have also had a speaker provide professional development related to dyslexia and techniques for early literacy and reading comprehension, and we are currently working to embed content related to reading, comprehension, and dyslexia into our programs. Additionally, based on these interviews, we plan to add information to our programs regarding the development of Individualized Education Plans (IEPs) and behavior management plans. While all students receive this information, these were other areas in which the need for increased emphasis was noted.

Our graduates were praised for their ability to build relationships with students, parents, community members, and colleagues, and their abilities to engage students in the classroom were highlighted throughout the interviews. As one administrator noted in regard to student engagement, “I think North Georgia is giving them the skills to do that and not line up 25 desks in a row and sit and teach in the front. ... They realize you’ve got to keep the kids engaged.” Interviews also highlighted completers’ use of technology as a teaching tool, as a data tool, and as a tool for fostering critical thinking. As one administrator stated of an inductee teacher, “I remember going in there, going, ‘Wow. It’s amazing!’ She uses technology not just as a tool to make sure students are proficient with skills, but she is guiding them to create their own stuff. She has them doing electronic portfolios to share. It was just amazing to watch.” In looking at inductees across programs, there were no major differences in terms of skills across the areas listed above, and the majority of comments for all areas were positive. In fact, most of the areas for improvement pertained to one particular inductee rather than emerging as patterns.

Overall, employers were very satisfied with the new teachers graduating from the University of North Georgia, as evidenced in our case study report and employer survey. This is reflected in the hiring and retention rates for our graduates and first-year teachers. National reports on teacher retention vary, but some indicate that as many as 50% of teachers leave within the first five to seven years and 20% leave within the first one to three years (National Education Association, http://www.nea.org/tools/17054.htm). Other sources cite 40% of teachers leaving within the first five years (McLaughlin, 2018). As García and Weiss (2019) indicated, the number of college students selecting education as a major is dropping as well, and this impacts the teacher pipeline. This directly affects EPPs, as many individuals now selecting education as a field are opting for alternative certification programs (García & Weiss, 2019).

We have managed to maintain our enrollment in our EPP, an accomplishment of which we are extremely proud, given falling numbers at EPPs across our state. Additionally, our graduates tend to remain in the field of education at higher rates than graduates of many of our sister institutions in the state of Georgia and when compared to national rates. As seen in our Employment and Retention Rates information in Measure 7, 84.4 percent of UNG graduates are still employed in the public-school sector after four years. These numbers do not account for graduates who may move out of state or who may be teaching in the private sector. Retention rates by program are lower for areas such as art, music, and physical education, but this could be due to the availability of positions in these fields. As many schools may have only one or two such positions or, in some cases, may share one position across schools, openings in these fields are fewer in number, and many of our students take alternate employment until they can find a position teaching in their fields.

Advanced Programs

Our advanced-level programs consist of our Master of Education in Curriculum and Instruction (C&I), the Tier I Educational Leadership program, and the Tier II Educational Leadership program. Because these are all relatively new programs, data is limited, especially in regard to completers. As seen in our “Advanced-Level Programs and Completers” chart below, there are only two cycles of data available for the C&I and Tier I programs, and we have no completer data for Tier II, as this program just started in the fall of 2019 and requires two years for completion.

Advanced-Level Programs and Completers

Program | Program Length | Program Beginning/End Dates | First Year of Program | First Year of Program Completers | Years of Completer Data
Curriculum and Instruction | 2 years (6 semesters) | Summer/Spring | 2016 | 2018 | 2
Tier I Educational Leadership | 1 year (4 semesters) | Summer/Summer | 2017 | 2018 | 2
Tier II Educational Leadership | 2 years (7 semesters) | Fall/Summer | 2019 | 2021 | 0

For our advanced programs, the EPP created surveys to ascertain employers' satisfaction with program completers. The GaPSC has not yet created employer surveys for these programs, so we developed our own. We send a survey to each employer one year after the alumni have completed their advanced program and have been employed in a public P-12 school in Georgia. These surveys are based on program-specific standards and state program requirements, and they measure perceived comfort/skill in meeting these standards in the classroom or at the school or district level (the questions have been provided in the supporting evidence along with the data). Please note that the EPP developed these surveys collaboratively with the program faculty, Advisory Council, and Educational Leadership Task Force.

Curriculum and Instruction

The EPP-created survey for the employers of Curriculum and Instruction (C&I) program completers is given one year after program completion to better assess the program’s impact on candidates’ skills, dispositions, and professional behaviors. The C&I program first started in the summer of 2016, and it takes two years for candidates to complete. This means our 2018 graduates were the first group to complete this program, and thus the first employer surveys, administered one year after completion, were collected in the summer of 2019. At this point, then, we have two cycles of data due to the time at which the program started. During the development of this survey with program faculty, the Advisory Council reviewed the employer survey, provided feedback on draft questions, and proposed additional questions. Their feedback was vital to this process, as many of the members are school- and district-level leaders with a strong understanding of curriculum and instruction goals and needs. We developed our own surveys for completers and their employers because the Georgia Professional Standards Commission does not cover these completers in the surveys it provides.

Our employer data for this program paints a somewhat different picture than what is shared by the completers in measure 4. The employer surveys indicate that employers are pleased with the skills and professional behaviors that completers demonstrate after completing the M.Ed. in C&I, more so than the completers themselves. Indeed, 96.15% of employer responses either agreed or strongly agreed with statements about how the program impacted the educators; in only two instances did an employer indicate that they neither agreed nor disagreed with a statement. Our C&I program faculty and administrators used this data to triangulate the results with the completer and induction surveys and the case study discussed in measure 4. Overall, the employer survey indicates the C&I program impacts candidates positively.

To reinforce this data, we receive data from the Georgia Department of Education (GaDOE) on our C&I completers as well (attached below). Again, we do not have three cycles of data yet because we have not had three years of completers who have been out of the program for at least one year. We may receive another set of data for last year's completers by the time of our site visit, although it could be limited due to COVID and the absence of state testing. The data demonstrated that all C&I completers scored at a level three on a four-point scale for their overall Teacher Effectiveness Measure (TEM), their Teacher Assessment on Performance Standards Rating (TAPs), and all student growth measures. Additionally, when examining the overall breakdown of TAPs scores, all completers scored at a three or higher (with an overall average of three), with a smaller number of completers scoring at a level four (the highest possible score) in the areas of Professional Knowledge (1 completer), Instructional Strategies (3 completers), Differentiated Instruction (1 completer), Assessment Uses (1 completer), Positive Learning Environment (3 completers), Academically Challenging Environment (1 completer), Professionalism (2 completers), and Communication (1 completer). This demonstrates that our completers in the C&I program are performing effectively in their classrooms and that their employers agree, as they are the ones assessing their performance via the TAPs.

Educational Leadership

The EDL Employer Survey was developed and administered for the first time in 2020. The survey, included in the supporting evidence, is given one year after program completion to better assess the program's impact on candidates' skills and professional behaviors. It was developed to supplement the Georgia Professional Standards Commission's data, which currently comes only from surveys of program completers at the point of completion and one year after completion. The program faculty and leadership collaborated to develop this survey according to the state standards, the Georgia Educational Leadership Standards, and the national standards, the Professional Standards for Educational Leaders. In the first administration, we captured the feedback of administrators who employed our summer 2019 completers of the Tier I Educational Leadership program. Because this is lagging data and the survey was first administered in 2020, we have only one data cycle.

Thus far, the survey and the first set of results have been reviewed by program faculty. Our next steps will be to present the survey and results to the Educational Leadership Task Force, consisting of leadership experts from our partnering school systems, to discuss the results and seek feedback on the survey. Their feedback is vital to this process, as the task force members are school- and district-level leaders.

The survey results are overwhelmingly positive. The majority (98.72%) of responses indicated that employers either agreed or strongly agreed with the statements concerning the leaders' preparation. This means that employers find our program completers to be well-prepared for their positions and to possess the skills and dispositions of leaders expected in the state and national standards. There were only two instances in which an employer indicated that they neither agreed nor disagreed with a statement. The responses to open-ended questions provided us with suggestions for currently relevant needs, such as leading in an online environment and a wider knowledge base of distance education. There was also a comment suggesting our program should include additional research-based resources for leaders. The employers' feedback aligns with what we have found thus far in our completion and induction surveys of former students, though those included additional and more specific suggestions for our program (discussed in measure 4).

To reinforce this data, we have data from the Georgia Department of Education (GaDOE) on new leaders who graduated from our Tier I program a year prior. Currently, we have only one set of data, as we have not yet received the 2019-2020 results on 2018-2019 completers (attached below). We may have this data by the time of our site visit, although it will be limited due to COVID and the absence of state testing. As noted in the GaDOE Leader Keys Effectiveness System Handbook, the LKES "is comprised of four components which contribute to an overall Leader Effectiveness Measure (LEM): Leaders Assessment on Performance Standards (LAPS), Student Growth, CCRPI School Climate Star Rating, and a Combination of Additional Data (Achievement Gap Reduction, Beat the Odds and CCRPI Data)" (p. 5). This evaluation system is similar to the TKES mentioned for initial programs. These first-year leaders do not yet have enough data for a LEM. As indicated in the handbook, "Leaders employed less than 65% of the instructional days of their assigned school will receive a full LAPS evaluation and will receive a Summative Performance Evaluation. No LEM (Annual Evaluation) will be generated" (GaDOE LKES Handbook, p. 20); moreover, student growth measures are a lagging measure, thus impacting the ability to receive a LEM score in the first year as a leader. As a whole, though, Tier I Educational Leadership completers received either a level 3 (87.5%) or a level 4 (12.5%) on the LAPS. They received level 3 scores on school growth ratings, and 85.71% scored a level 4, the highest possible rating, for the school climate star rating. This rating is an indication of "how well a school is fostering an atmosphere where students feel welcomed, safe, and respected" (GaDOE LKES Handbook, p. 16).

On the standards reviewed and scored for the overall LAPS ratings, first-year leaders scored at a level 3 or 4 on all indicators, with the exception of one completer scoring a 2 in the category of Organizational Management. Several completers scored a level 4 in the following categories: Instructional Leadership (62.5%), School Climate (37.5%), Planning and Assessment (12.5%), Organizational Management (25%), Human Resources Management (12.5%), Teacher/Staff Evaluation (25%), Professionalism (50%), and Communication and Community Relations (37.5%). As seen here, the data demonstrate the satisfaction of employers, as they are the ones completing the LAPS on their Tier I leaders. The data also demonstrate that these new leaders are performing very well in their roles, even achieving level 4s on several indicators, which is unusual for first-year leaders. Moreover, the data demonstrate their effectiveness as leaders, in that they are having an overall positive impact on their schools, student achievement, and student wellbeing, and on the logistics of leadership, including hiring, evaluation, human resources, and overall resource management. Lastly, these leaders are proving to be ethical leaders, scoring high in the category of professionalism and ethical behaviors. As noted previously, we do not yet have any Tier II Educational Leadership completers, as our first graduates will not finish the program until 2021.

Measure 4. Satisfaction of Completers (Components 4.4 | A.4.2)

The provider demonstrates that program completers perceive their preparation as relevant to the responsibilities they confront on the job, and that the preparation was effective.  

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate “the impact of its completers on P-12 student learning and development, classroom instruction, and schools, and the satisfaction of its completers with the relevance and effectiveness of their preparation” (CAEP Standard 4). In terms of component 4.4 and the satisfaction of completers, our evidence demonstrates that our graduates find their preparation to be relevant to and effective for their positions as educators and leaders. Below, we discuss our findings and evidence for initial programs and then advanced programs.

Initial Programs

The most significant piece of evidence documenting completers’ satisfaction is our case study report. For this study, we interviewed 37 completers from across all programs to determine their impact on P-12 students' learning and development, how their EPP prepared them for these positions, and where they saw room for improvement.

Overwhelmingly, completers noted that they were satisfied with their preparation, and again, the major areas for improvement centered on additional preparation in differentiation, classroom management, and literacy. In particular, completers noted that they were strong in their abilities to forge relationships with students, parents, and colleagues. They also indicated that the depth of field and clinical experiences prepared them to enter their new positions with increased confidence, and that the experience provided beforehand allowed them to enter the classroom with a better understanding of the importance of (1) using data, (2) being flexible, and (3) building relationships based on getting to know one's students.

This satisfaction was reflected, as well, in the inductee surveys provided by the GaPSC, which demonstrate overall confidence in planning, instruction, assessment, and professional behaviors (see Induction Teacher and Leader Surveys below). Again, these surveys mirror the case study, with an emphasis on differentiation and classroom management. In particular, inductees expressed the need for additional knowledge about working with the various learners in their classrooms, including gifted students, at-risk students, diverse learners, English Language Learners, and students with special needs. While all of our students are required to have diverse placements, we plan to integrate more information on differentiation into classes, particularly as related to the diversity of learners.

In conclusion, we believe the satisfaction of completers is evidenced in our retention numbers, as related to the first years of teaching (reported in measure 7). The fact that our graduates tend to remain in the field is a testament to their strength as teachers and to their satisfaction with their employment and the preparation that helped them to get there. This is evidenced, too, in the fact that the majority of our advanced-level M.Ed. candidates and many of our Ed.L. candidates also attained their initial degrees at the University of North Georgia, and they return to advance their education and their careers.

Advanced Programs

Our advanced-level programs consist of our Master of Education in Curriculum and Instruction (C&I), the Tier I Educational Leadership program, and the Tier II Educational Leadership program. Because these are all relatively new programs, data is limited, especially in regard to completers. As seen in our “Advanced-Level Programs and Completers” chart below, there are only two cycles of data available for the C&I and Tier I programs, and we have no completer data for Tier II, as this program just started in the fall of 2019 and requires two years for completion.

Advanced-Level Programs and Completers

Program | Program Length | Program Beginning/End Dates | First Year of Program | First Year of Program Completers | Years of Completer Data
Curriculum and Instruction | 2 years (6 semesters) | Summer/Spring | 2016 | 2018 | 2
Tier I Educational Leadership | 1 year (4 semesters) | Summer/Summer | 2017 | 2018 | 2
Tier II Educational Leadership | 2 years (7 semesters) | Fall/Summer | 2019 | 2021 | 0

To evidence our advanced-level program completers’ satisfaction, we have included surveys from both the Georgia Professional Standards Commission (GaPSC) and the Educator Preparation Provider (EPP) given to completers one year after program completion. We wait one year after program completion because we want a better understanding of how the program impacts graduates' classroom and/or leadership skills. The GaPSC recently developed an Educational Leadership survey for candidates and completers, but it has not yet developed an employer survey. Additionally, it has not developed a survey for service fields, including Curriculum and Instruction, which is one reason we created our own. Please note that the EPP-developed surveys were created collaboratively with the program faculty, Advisory Council, and Educational Leadership Task Force. We have included the GaPSC surveys here, which provide data on both initial programs and leadership fields, and we have supplemented them with our own surveys (results and analysis attached below).

Curriculum and Instruction

Survey data from completers and employers for the Curriculum and Instruction (C&I) program is mixed. For the first year of completers, only three graduates responded, one of whom was not happy with the program or the outcomes. For the most part, the other two respondents agreed that the program had benefited them and positively impacted their roles as educators, but three responses are not enough from which to draw generalizations or pinpoint patterns. The second year of completer data demonstrated much more positive responses in regard to the overall impact of the program on their skills as teacher leaders in the field of C&I. This data, along with the candidate surveys given at the end of their time in the program, is provided to faculty each year for review.

We have made many changes to this program over the past three years in response to survey data, especially the survey given at the end of candidates' time in the program, as we have a nearly 100% response rate on that survey. We have revised courses and key assessments, aligning all to the appropriate standards using a CAEP checklist; we have created an orientation for C&I candidates; we have revised our website and program materials; we have revised our C&I Handbook; and we meet with faculty on a regular basis to ensure that everyone is on the same page and understands the direction of the program. These faculty meetings are key to interweaving the same themes and strands throughout each course so that candidates understand the program's goals and see the value of the culminating action research project. Lastly and most significantly, we are currently revising the entire curriculum, coursework, and program of study. Most of these changes have been made in response to student and completer feedback during our first three years of running this program. The modifications currently in progress respond to this feedback as well, and also to the current need to advance educators’ understanding of diversity, data-responsive teaching strategies, and technology integration. As noted in measure 3, our employer data for this program paints a somewhat different picture: the employer surveys indicate that employers are pleased with the skills and professional behaviors that completers demonstrate after having completed the M.Ed. in C&I, more so than the completers themselves.

Because we had fewer years to collect data from our completers, we supplemented data from both completers and employers with our Case Study (attached below). In this study, we interviewed completers across all programs (37 inductee teachers/completers, 14 C&I completers), their employers (37 administrators), and veteran teachers who worked with inductees as mentors in their first or induction-level year (9 mentor teachers). The entire case study, including methodology, data, analysis, and interview protocols/questions, is included as evidence, but we address some highlights related to C&I completers here. Appendix B of this Case Study is devoted solely to the results of the C&I inductee and administrator interviews. Multiple codes emerged from the interviews, including the following: (1) relationship building, (2) academic growth, (3) classroom management, (4) collaboration, (5) communication, (6) content knowledge, (7) data and assessment, (8) differentiation, (9) diversity, (10) field placement experience, (11) pedagogy, (12) professional dispositions, (13) social growth of P-12 students, and (14) technology. For each of these areas, the case study highlights subcodes from inductee teachers, administrators, and veteran teachers acting as mentors. For each subcode, the case study covers strengths, areas for improvement, suggestions, and overall implications/summaries. In all areas, the feedback was overwhelmingly positive regarding inductees' skills in terms of pedagogy and their overall impact and effectiveness on their P-12 students.

Overall, C&I inductees identified strengths in the areas of content knowledge, specifically research; in assessment and data use; in their ability to impact children's academic growth; in their ability to impact children's social growth; in regard to their pedagogical knowledge; and in terms of differentiation. Completers also identified strengths in regard to technology, diversity, and relationship building. Eight alumni indicated that content knowledge was the most impactful component of the C&I program on their current work. Completers indicated that they learned to become "critical consumers" of research, learning how to apply research to their own schools and classrooms via action research and how to assess programs and content based on the consumption and review of related research. Further, they learned how to apply data to improve their classrooms and instruction, which is another aspect of this research component. As one candidate noted, the application of data "shaped my teaching practice." The program also helped to expand their knowledge of differentiation, which, as we have seen, is a difficult topic across programs and completers. As one completer noted, "Before I was differentiating for the sake of marking it on a paper; whereas once I learned how to analyze data, I was differentiating based on data." Areas for improvement in regard to the C&I program included content knowledge: although it was also the most impactful area, completers noted that they wanted less focus on research and more focus on C&I content. Classroom management, as with other programs, was pinpointed as an area needing additional focus.

Based on our Case Study interviews, we have implemented multiple changes to the program. As indicated by the survey results, we combined the Case Study and survey data to look for patterns. Many of the completers' desired changes focused on program logistics, which we have made great strides to address by creating a stronger orientation and revising the C&I Handbook. We have improved the advising system so that students always have a faculty member to act as a mentor and advisor throughout the program. Supplementing survey and case study data with course and assessment data, faculty have reviewed all key assessments, courses, and program objectives to ensure that they are aligned with the required standards. We have mapped the program to ensure that the course sequence builds upon a strong foundation. Lastly, faculty in the program meet regularly in both data days (twice per semester) and faculty meetings (once per month) to ensure that we have ample time to make changes and that everyone is on the same page with program changes, student monitoring, and daily program logistics. This consistent communication has been vital to program improvements.

To reinforce this data, we receive data from the Georgia Department of Education (GaDOE) on our C&I completers as well (attached below). Again, we do not have three cycles of data yet because we have not had three years of completers who have been out of the program for at least one year. We may receive another set of data for last year's completers by the time of our site visit, although it could be limited due to COVID and the absence of state testing. The data demonstrated that all C&I completers scored at a level three on a four-point scale for their overall Teacher Effectiveness Measure (TEM), their Teacher Assessment on Performance Standards Rating (TAPs), and all student growth measures. Additionally, when examining the overall breakdown of TAPs scores, all completers scored at a three or higher (with an overall average of three), with a smaller number of completers scoring at a level four (the highest possible score) in the areas of Professional Knowledge (1 completer), Instructional Strategies (3 completers), Differentiated Instruction (1 completer), Assessment Uses (1 completer), Positive Learning Environment (3 completers), Academically Challenging Environment (1 completer), Professionalism (2 completers), and Communication (1 completer). This demonstrates that our completers in the C&I program are performing effectively in their classrooms and that their employers agree, as they are the ones assessing their performance via the TAPs.

Tier I Educational Leadership

We completed a separate case study with our Tier I Educational Leadership completers, which is included as an appendix in the Case Study evidence attached below. For this study, we interviewed six completers to gain their perceptions of the program's impact one year after completion. We wanted to better understand whether these new leaders felt prepared to enter their first year in such positions, how the program affected their confidence and preparation, and, with the benefit of hindsight, in what ways we could have improved. These interviews took place as focus groups, with each group meeting with program faculty for over an hour. The interview protocol is also included as an appendix within the Case Study.

What we found is that candidates felt their program had prepared them as much as possible; however, there was a sense of being "in-between." Completers were not yet fully accepted as leaders, yet they were no longer teachers in the traditional sense. As a result, they felt not quite confident in their roles, occupying, in many ways, an "in-between" space. We therefore decided to approach the study from a framework of post-colonialism and borderlands or third space theories (Bhabha, 1994; Anzaldúa, 1998).

Then, we met a third time to develop themes, which centered on the following: a sense of self in the "in-between," transition of thinking in the "in-between," and complexity of the third space. What we discovered is that Educational Leadership programs need to focus on the methodology behind program development, thereby creating third spaces where candidates and completers can self-reflect and dialogue on how to exist in these in-between spaces. These are not negative spaces; they are necessary spaces for growth during the transition. Completers felt comfortable with the nuts and bolts of educational leadership, but it was in these transition spaces, and in the transition of thinking, that they needed additional focus.

Essentially, our findings indicated that new leaders need continued spaces in which to self-reflect on their roles as leaders within their institutions/organizations, and they need places in which to discuss these roles and the issues that they are facing. This methodological structure would add a much-needed layer of support to new leaders, much like the recommended induction-level support suggested by administrators for our new teachers. In these spaces, new leaders could continue to build their confidence and develop support structures necessary to help them transition into stronger leaders. On the whole, though, graduates were happy with the support provided in the program and felt as prepared as possible to enter into their new roles. 

To reinforce these findings, we have data from the Georgia Department of Education (GaDOE) on new leaders who graduated from our Tier I program a year prior. Currently, we have only one set of data, as we have not yet received the 2019-2020 results on 2018-2019 completers (attached below). We may have these data by the time of our site visit, although they will be limited due to COVID and the absence of state testing. As noted in the GaDOE Leader Keys Effectiveness System Handbook, the LKES "is comprised of four components which contribute to an overall Leader Effectiveness Measure (LEM): Leaders Assessment on Performance Standards (LAPS), Student Growth, CCRPI School Climate Star Rating, and a Combination of Additional Data (Achievement Gap Reduction, Beat the Odds and CCRPI Data)" (p. 5). This is an evaluation system similar to the TKES mentioned for initial programs. These first-year leaders do not yet have enough data for a LEM. As indicated in the handbook, "Leaders employed less than 65% of the instructional days of their assigned school will receive a full LAPS evaluation and will receive a Summative Performance Evaluation. No LEM (Annual Evaluation) will be generated" (GaDOE LKES Handbook, p. 20); moreover, student growth is a lagging measure, which also limits the ability to receive a LEM score in the first year as a leader. As a whole, though, Tier I Educational Leadership completers received either a level 3 (87.5%) or a level 4 (12.5%) on the LAPS. They received level 3 scores on school growth ratings, and 85.71% scored a level 4, the highest possible rating, for the School Climate Star Rating. This rating is an indication of "how well a school is fostering an atmosphere where students feel welcomed, safe, and respected" (GaDOE LKES Handbook, p. 16).

On the standards reviewed and scored for the overall LAPS ratings, first-year leaders scored at a level 3 or 4 on all indicators, with the exception of one completer scoring a 2 in the category of Organizational Management. Several completers scored a level 4 in the following categories: Instructional Leadership (62.5%), School Climate (37.5%), Planning and Assessment (12.5%), Organizational Management (25%), Human Resources Management (12.5%), Teacher/Staff Evaluation (25%), Professionalism (50%), and Communication and Community Relations (37.5%). As seen here, the data demonstrate the satisfaction of employers, as they are the ones completing the LAPS on their Tier I leaders. The data also demonstrate that these new leaders are performing very well in their roles, even achieving level 4s on several indicators, which is unusual for first-year leaders. Moreover, the data demonstrate their effectiveness as leaders, in that they are having an overall positive impact on their schools, student achievement, student wellbeing, and the logistics of leadership, including hiring, evaluation, human resources, and overall resource management. Lastly, these leaders are proving to be ethical leaders, scoring high in the category of professionalism and ethical behaviors. As noted previously, we do not yet have any Tier II Educational Leadership completers, as our first graduates will not finish the program until 2021.

5. Graduation Rates (Initial & Advanced Levels)

Initial Educator Preparation Program Completion Rates

Elementary and Special Education

Elementary and Special Education graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 91.60% 119 109 10
2016 - 2017 84.80% 125 106 10
2017 - 2018 91.03% 145 132 13
2018 - 2019 83.22% 149 124 23
2019 - 2020 May 2021 159 TBD TBD

Middle Grades Education

Middle Grades Education graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 75.00% 32 24 8
2016 - 2017 94.44% 18 17 1
2017 - 2018 90.91% 44 40 4
2018 - 2019 86.21% 29 25 4
2019 - 2020 May 2021 33 TBD TBD

Secondary Education (Biology, Chemistry, English, History, Mathematics, Physics)

Secondary Education graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 96.00% 25 24 1
2016 - 2017 78.38% 37 29 8
2017 - 2018 88.24% 34 30 4
2018 - 2019 77.14% 35 27 5
2019 - 2020 May 2021 33 TBD TBD

P-12 Education (Art, Music, Kinesiology with Teacher Certification)

P-12 Education graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 82.61% 23 19 4
2016 - 2017 75.00% 16 12 4
2017 - 2018 71.43% 28 20 8
2018 - 2019 73.08% 26 19 6
2019 - 2020 May 2021 21 TBD TBD

Post-Baccalaureate and MAT Initial Certification

Post-Baccalaureate and MAT initial certification graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 95.24% 21 20 1
2016 - 2017 96.15% 26 25 1
2017 - 2018 95.24% 21 20 1
2018 - 2019 95.83% 24 23 1
2019 - 2020 May 2021 21 TBD TBD

Advanced Educator Preparation Program Completion Rates

Curriculum and Instruction (began summer 2016)

Curriculum and Instruction graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2015 - 2016 84.26% 13 11 2
2016 - 2017 80.00% 20 16 4
2017 - 2018 90.91% 11 10 1
2018 - 2019 May 2021 15 TBD TBD
2019 - 2020 May 2022 22 TBD TBD

Educational Leadership - Tier 1 (began summer 2017)

Educational Leadership - Tier 1 graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2016 - 2017 76.67% 30 23 7
2017 - 2018 88.00% 25 22 3
2018 - 2019 86.11% 36 31 5
2019 - 2020 August 2021 40 TBD TBD

Educational Leadership - Tier 2 (began fall 2019)

Educational Leadership - Tier 2 graduation rates displayed in a bar graph visual.

Cohort Entry Year Completion Rate Admitted Completed Withdrew
2019 - 2020 100.00% 4 4 0
2020 - 2021 December 2021 9 TBD TBD
2021 - 2022 December 2022 TBD TBD TBD
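The completion rates in the cohort tables above follow directly from the admitted and completed counts (completed divided by admitted). As a minimal sketch of that arithmetic, with the function name and two-decimal rounding convention being our own illustration rather than an official formula:

```python
def completion_rate(admitted: int, completed: int) -> float:
    """Cohort completion rate as a percentage, rounded to two decimal places."""
    if admitted <= 0:
        raise ValueError("admitted must be positive")
    return round(100 * completed / admitted, 2)

# Spot-checks against the tables above:
print(completion_rate(119, 109))  # Elementary & Special Ed 2015-2016 -> 91.6 (reported 91.60%)
print(completion_rate(30, 23))    # Ed Leadership Tier I 2016-2017 -> 76.67
```

Note that in some rows the admitted count exceeds completed plus withdrew; the rates are computed against the full admitted cohort.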

6. Ability of Completers to Meet Licensing (Certification) and Any Additional State Requirements; Title II (Initial & Advanced Levels)

The University of North Georgia’s Educator Preparation Provider (EPP) uses several measures to demonstrate the ability of completers to meet licensing (certification) and state requirements. In this section, we present our Title II reporting and Georgia Professional Standards Commission Preparation Program Effectiveness Measures as supporting evidence.

Recently, the Georgia Professional Standards Commission (GaPSC) began publishing scores for Educator Preparation Providers (EPPs) across the state. The EPP as a whole is scored, as is each program (programs are combined for the aggregate score). The PPEM scores are based on in-program measures (50 percent), consisting of edTPA scores (30 percent) and Georgia Assessments for the Certification of Educators (GACE) content scores (20 percent); the other 50 percent is based on outcome measures that come into play once completers are in their first year of teaching. These outcome measures stem from the EPP completers' TAPS scores (30 percent), the GaPSC employer survey (10 percent), and the GaPSC inductee/first-year teacher survey (10 percent), all of which have been mentioned previously in this narrative. These five measures "generate one of four possible overall ratings for a provider or program. Level 3 is the expected standard of performance for programs and providers in the state, with about three in four providers earning this rating in prior years for which the calculation was modeled" (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). The scores are calculated using an aggregate of three years of data for all programs (Georgia's TPPEM_Academic Year 2018-2019, GaPSC). More information on how scores are calculated, and on the consequences of these scores, has been included in the supporting evidence below. As of the 2018-2019 academic year, these scores became consequential, meaning they have an impact on program approval. For the 2018-2019 year, our EPP scored a 4 overall, the highest possible score. This is the first and only year these scores have been published, although we should receive a rating for the 2019-2020 academic year this fall. Data have been included for the EPP as a whole and for each program (see supporting evidence below).
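The PPEM weighting described above amounts to a weighted average of the five component measures. The sketch below is illustrative only: the weights come from the narrative, but the 0-to-4 component scores in the example, and the assumption that components are pre-normalized to a common scale, are our own placeholders rather than the GaPSC's actual scaling or conversion rules.

```python
# Component weights as described in the narrative (in-program 50%, outcomes 50%).
PPEM_WEIGHTS = {
    "edTPA": 0.30,            # in-program: performance portfolio
    "GACE_content": 0.20,     # in-program: content knowledge assessment
    "TAPS": 0.30,             # outcome: first-year observation scores
    "employer_survey": 0.10,  # outcome: GaPSC employer survey
    "inductee_survey": 0.10,  # outcome: GaPSC inductee/first-year teacher survey
}

def ppem_composite(component_scores):
    """Weighted average of component scores (each assumed already on a 0-4 scale)."""
    assert set(component_scores) == set(PPEM_WEIGHTS), "all five components required"
    return sum(PPEM_WEIGHTS[name] * score for name, score in component_scores.items())

# Hypothetical example: strong in-program and outcome measures
example = {"edTPA": 3.8, "GACE_content": 3.5, "TAPS": 3.6,
           "employer_survey": 3.9, "inductee_survey": 3.7}
print(round(ppem_composite(example), 2))  # 3.68
```

How a composite like this maps to the four published levels is determined by the GaPSC's cut scores, which are documented in the supporting evidence rather than modeled here.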

Supporting Evidence:

7. Ability of Completers to be Hired in Education Positions for Which they Have Prepared (Initial & Advanced Levels)

Employment Rates and Retention Rates One Year Post Graduation

As seen in the tables presented below, the majority of graduates from initial and advanced preparation programs at the University of North Georgia earn positions upon graduating and are employed one year after graduation in a public school in Georgia. It is important to note that this does not account for graduates who may leave the state of Georgia or who may find employment in Georgia's private school sector. Unfortunately, we do not have a tracking mechanism for these students. Additionally, data for the Early Childhood Education (ECE) program are presented for only one year; this was an evening program that was discontinued after academic year 2015-2016. Similarly, for our advanced programs, the Educational Leadership program did not have graduates until 2017, and the first Curriculum and Instruction cohort graduated in 2018. Therefore, there are no data to include for the prior years.

Lower employment rates are to be expected in some programs, including history education, art education, music education, and physical education or kinesiology. There are fewer positions available in these fields, so these positions are more competitive and more difficult to attain across the state. For example, in art, music, and physical education, there tend to be only one or two positions at most elementary and middle schools. These fields are represented, as well, in the Master of Arts in Teaching and Post-Baccalaureate program, thus impacting that program's overall employment numbers.

According to the Georgia Department of Labor, the growth rates for elementary education, secondary education, middle grades education, and special education are average for the northeast Georgia region. Starting in the 2016-2017 academic year, we began hosting our own Teacher Recruitment Fair, in collaboration with the University of North Georgia’s Career Services, and we believe this has assisted our students in attaining positions for their first year. For this recruitment fair, we invite school districts from across the state to attend at no cost, and all graduating students are invited. Many of our students participate in interviews while at the Fair, and several leave with positions secured. In addition to the Fair, our students’ year-long field and clinical placements often help in securing positions. Many of our students become an integral part of their schools during this time, and administrators are able to see first-hand the kind of educators they will be. The fact that administrators and teachers get to know candidates very well can help graduates who are seeking a position at their placement schools.

Table 1
Academic Year 2018-2019 Graduates Teaching in 2019-2020

Program Program Level No Public School Position in First Year Employed in First Year
EPP All 18.25% 81.75%
Elementary & Special Education Initial 12.78% 87.22%
Middle Grades Initial 7.32% 92.68%
Biology Initial 0.00% 100.00%
English Initial 16.67% 83.33%
History Initial 25.00% 75.00%
Mathematics Initial 0.00% 100.00%
Art Initial 0.00% 100.00%
Music Initial 0.00% 100.00%
Kinesiology Initial 50.00% 50.00%
Post-Baccalaureate & MAT Initial 35.71% 64.29%
Curriculum and Instruction Advanced 0.00% 100.00%
Education Leadership Advanced 55.17% 44.83%

Table 2
Academic Year 2017-2018 Graduates Teaching in 2018-2019

Program Program Level No Public School Position in First Year Employed in First Year
EPP All 21.11% 78.89%
Elementary & Special Education Initial 13.59% 86.41%
Middle Grades Education Initial 13.33% 86.67%
Biology Education Initial 25.00% 75.00%
English Education Initial 25.00% 75.00%
History Education Initial 70.00% 30.00%
Mathematics Education Initial 16.67% 83.33%
Art Education Initial 0.00% 100.00%
Music Education Initial 50.00% 50.00%
Kinesiology Education Initial 50.00% 50.00%
Post-Baccalaureate & MAT Initial 18.75% 81.25%
Education Leadership Advanced 0.00% 100.00%

Table 3
Academic Year 2016-2017 Graduates Teaching in 2017-2018

Program Program Level No Public School Position in First Year Employed in First Year
EPP All 26.29% 75.3%
Elementary & Special Education Initial 23.68% 76.32%
Early Childhood Initial N/A N/A
Middle Grades Initial 19.23% 80.77%
Biology Initial 25.00% 75.00%
English Initial 33.33% 66.67%
History Initial 25.00% 75.00%
Mathematics Initial 27.27% 72.73%
Art Initial 42.86% 57.14%
Music Initial 33.33% 66.67%
Kinesiology Initial 53.33% 46.67%
Post-Baccalaureate & MAT Initial 25.00% 75.00%

Table 4
Academic Year 2015-2016 Graduates Teaching in 2016-2017

Program Program Level No Public School Position in First Year Employed in First Year
EPP All 25.11% 74.89%
Elementary & Special Education Initial 13.08% 86.92%
Early Childhood Initial 6.25% 93.75%
Middle Grades Initial 26.67% 73.33%
Biology Initial 33.33% 66.67%
English Initial 27.27% 72.73%
History Initial 44.44% 55.56%
Mathematics Initial 16.67% 83.33%
Art Initial 50.00% 50.00%
Music Initial 80.00% 20.00%
Kinesiology Initial 64.29% 35.71%
Post-Baccalaureate & MAT Initial 61.54% 30.77%

Additional Employment Information

As recently published by the Georgia Professional Standards Commission (GaPSC), the five-year retention rate for graduates in the field of education who are hired into teaching positions and remain employed by their school systems is 73.7% (Georgia Teacher Supply and Retention, 2019). The majority of those, 67.5%, are still employed in a teaching role, while 2.9% are in administrative roles and 3.4% are employed in other roles. The GaPSC also published the four-year rate by institution in this same report, indicating that the University of North Georgia’s four-year teacher retention rate is 84.4% of a total of 398 graduates. Although this retention rate is very high, especially when compared with our counterparts across the state and the nation, there is always room for improvement. Please note that this was a special report provided by the state and a new version has not been issued; as a result, we have no mechanism to update this information until new data are provided to us.

8. Student Loan Default Rates and Other Consumer Information (Initial & Advanced Levels)

Supporting Evidence:

Program Quality

Information on education salaries in Georgia and GaPSC tiered certification
