This report summarizes the research and related activities performed. Trust is one of the most significant obstacles to the broad use of autonomy technology by DoD and other agencies. This research supports the development of computational methods that provide a bridge toward the engineering of trustworthy autonomous systems. The objectives of this research were (1) to operationalize the quality of benevolence and understand how it contributes to well-calibrated trust in, and reliance upon, autonomous systems, and (2) to investigate the portrayal of trust-related attributes in the human-machine interface. Accomplishments include: (1) the formulation of benevolence as a complex belief structure whose antecedent beliefs have semantic, temporal, causal, and other interrelationships; (2) the mapping of a portion of this belief structure to measurable internal states of autonomous systems; (3) empirical evidence supporting the applicability of psychological concepts of interpersonal human trust to autonomous systems, including the role of personality and situation in modulating the role and importance of certain beliefs; and (4) the creation of a theory, and the engineering of a prototype Human Social Interface, for machine portrayal of trust qualities in human-machine social interaction.